Test Report: KVM_Linux_crio 22081

502ebf1e50e408071a7e5daf27f82abd53674654:2025-12-09:42698

Test fail (15/431)

TestAddons/parallel/Ingress (159.91s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:269: (dbg) Run:  kubectl --context addons-712341 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:294: (dbg) Run:  kubectl --context addons-712341 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:307: (dbg) Run:  kubectl --context addons-712341 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:312: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [56885611-8b41-4e56-b6f9-8cc75bfdbfd9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [56885611-8b41-4e56-b6f9-8cc75bfdbfd9] Running
addons_test.go:312: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.005336463s
I1209 01:58:52.641123  258854 kapi.go:150] Service nginx in namespace default found.
addons_test.go:324: (dbg) Run:  out/minikube-linux-amd64 -p addons-712341 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:324: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-712341 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.794406633s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:340: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:348: (dbg) Run:  kubectl --context addons-712341 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:353: (dbg) Run:  out/minikube-linux-amd64 -p addons-712341 ip
addons_test.go:359: (dbg) Run:  nslookup hello-john.test 192.168.39.107
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-712341 -n addons-712341
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-712341 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-712341 logs -n 25: (1.306509604s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-045512                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-045512 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
	│ start   │ --download-only -p binary-mirror-413418 --alsologtostderr --binary-mirror http://127.0.0.1:33411 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-413418 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │                     │
	│ delete  │ -p binary-mirror-413418                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-413418 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
	│ addons  │ enable dashboard -p addons-712341                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-712341        │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │                     │
	│ addons  │ disable dashboard -p addons-712341                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-712341        │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │                     │
	│ start   │ -p addons-712341 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-712341        │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:58 UTC │
	│ addons  │ addons-712341 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-712341        │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:58 UTC │
	│ addons  │ addons-712341 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-712341        │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:58 UTC │
	│ addons  │ enable headlamp -p addons-712341 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-712341        │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:58 UTC │
	│ addons  │ addons-712341 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-712341        │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:58 UTC │
	│ addons  │ addons-712341 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-712341        │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:58 UTC │
	│ addons  │ addons-712341 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-712341        │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:58 UTC │
	│ ip      │ addons-712341 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-712341        │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:58 UTC │
	│ addons  │ addons-712341 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-712341        │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:58 UTC │
	│ addons  │ addons-712341 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-712341        │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:58 UTC │
	│ ssh     │ addons-712341 ssh cat /opt/local-path-provisioner/pvc-5f1d4e27-646c-4ec7-9bd6-c32e7c190c45_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-712341        │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:58 UTC │
	│ addons  │ addons-712341 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-712341        │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:59 UTC │
	│ addons  │ addons-712341 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-712341        │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:58 UTC │
	│ ssh     │ addons-712341 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-712341        │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │                     │
	│ addons  │ addons-712341 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-712341        │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-712341                                                                                                                                                                                                                                                                                                                                                                                         │ addons-712341        │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
	│ addons  │ addons-712341 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-712341        │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
	│ addons  │ addons-712341 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-712341        │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
	│ addons  │ addons-712341 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-712341        │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
	│ ip      │ addons-712341 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-712341        │ jenkins │ v1.37.0 │ 09 Dec 25 02:01 UTC │ 09 Dec 25 02:01 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 01:55:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 01:55:51.913915  259666 out.go:360] Setting OutFile to fd 1 ...
	I1209 01:55:51.914035  259666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:55:51.914042  259666 out.go:374] Setting ErrFile to fd 2...
	I1209 01:55:51.914049  259666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:55:51.914237  259666 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
	I1209 01:55:51.914843  259666 out.go:368] Setting JSON to false
	I1209 01:55:51.915755  259666 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27502,"bootTime":1765217850,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 01:55:51.915834  259666 start.go:143] virtualization: kvm guest
	I1209 01:55:51.917867  259666 out.go:179] * [addons-712341] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 01:55:51.919297  259666 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 01:55:51.919305  259666 notify.go:221] Checking for updates...
	I1209 01:55:51.922532  259666 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 01:55:51.924042  259666 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-254936/kubeconfig
	I1209 01:55:51.925428  259666 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-254936/.minikube
	I1209 01:55:51.926969  259666 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 01:55:51.928424  259666 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 01:55:51.929898  259666 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 01:55:51.961030  259666 out.go:179] * Using the kvm2 driver based on user configuration
	I1209 01:55:51.962300  259666 start.go:309] selected driver: kvm2
	I1209 01:55:51.962315  259666 start.go:927] validating driver "kvm2" against <nil>
	I1209 01:55:51.962328  259666 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 01:55:51.963041  259666 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 01:55:51.963291  259666 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 01:55:51.963317  259666 cni.go:84] Creating CNI manager for ""
	I1209 01:55:51.963358  259666 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 01:55:51.963368  259666 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 01:55:51.963408  259666 start.go:353] cluster config:
	{Name:addons-712341 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-712341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 01:55:51.963506  259666 iso.go:125] acquiring lock: {Name:mk5e3a22cdf6cd1ed24c9a04adaf1049140c04b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 01:55:51.965076  259666 out.go:179] * Starting "addons-712341" primary control-plane node in "addons-712341" cluster
	I1209 01:55:51.966686  259666 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 01:55:51.966721  259666 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-254936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1209 01:55:51.966729  259666 cache.go:65] Caching tarball of preloaded images
	I1209 01:55:51.966818  259666 preload.go:238] Found /home/jenkins/minikube-integration/22081-254936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 01:55:51.966851  259666 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 01:55:51.967214  259666 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/config.json ...
	I1209 01:55:51.967244  259666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/config.json: {Name:mkbc318e9832bd68097f4bd0339c0ce1fe587cd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:55:51.967430  259666 start.go:360] acquireMachinesLock for addons-712341: {Name:mkb4bf4bc2a6ad90b53de9be214957ca6809cd32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 01:55:51.967505  259666 start.go:364] duration metric: took 53.333µs to acquireMachinesLock for "addons-712341"
	I1209 01:55:51.967530  259666 start.go:93] Provisioning new machine with config: &{Name:addons-712341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-712341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 01:55:51.967609  259666 start.go:125] createHost starting for "" (driver="kvm2")
	I1209 01:55:51.970193  259666 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1209 01:55:51.970407  259666 start.go:159] libmachine.API.Create for "addons-712341" (driver="kvm2")
	I1209 01:55:51.970444  259666 client.go:173] LocalClient.Create starting
	I1209 01:55:51.970559  259666 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem
	I1209 01:55:52.007577  259666 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/cert.pem
	I1209 01:55:52.072361  259666 main.go:143] libmachine: creating domain...
	I1209 01:55:52.072386  259666 main.go:143] libmachine: creating network...
	I1209 01:55:52.074044  259666 main.go:143] libmachine: found existing default network
	I1209 01:55:52.074296  259666 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1209 01:55:52.074887  259666 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d3e060}
	I1209 01:55:52.075028  259666 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-712341</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1209 01:55:52.081172  259666 main.go:143] libmachine: creating private network mk-addons-712341 192.168.39.0/24...
	I1209 01:55:52.157083  259666 main.go:143] libmachine: private network mk-addons-712341 192.168.39.0/24 created
	I1209 01:55:52.157424  259666 main.go:143] libmachine: <network>
	  <name>mk-addons-712341</name>
	  <uuid>0556de88-06ab-485d-9c24-8217acb00de5</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:ce:5b:fb'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1209 01:55:52.157458  259666 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341 ...
	I1209 01:55:52.157492  259666 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22081-254936/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso
	I1209 01:55:52.157507  259666 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22081-254936/.minikube
	I1209 01:55:52.157593  259666 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22081-254936/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22081-254936/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso...
	I1209 01:55:52.421048  259666 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa...
	I1209 01:55:52.570398  259666 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/addons-712341.rawdisk...
	I1209 01:55:52.570456  259666 main.go:143] libmachine: Writing magic tar header
	I1209 01:55:52.570484  259666 main.go:143] libmachine: Writing SSH key tar header
	I1209 01:55:52.570557  259666 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341 ...
	I1209 01:55:52.570616  259666 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341
	I1209 01:55:52.570655  259666 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341 (perms=drwx------)
	I1209 01:55:52.570669  259666 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22081-254936/.minikube/machines
	I1209 01:55:52.570679  259666 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22081-254936/.minikube/machines (perms=drwxr-xr-x)
	I1209 01:55:52.570687  259666 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22081-254936/.minikube
	I1209 01:55:52.570699  259666 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22081-254936/.minikube (perms=drwxr-xr-x)
	I1209 01:55:52.570710  259666 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22081-254936
	I1209 01:55:52.570725  259666 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22081-254936 (perms=drwxrwxr-x)
	I1209 01:55:52.570735  259666 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1209 01:55:52.570742  259666 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 01:55:52.570753  259666 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1209 01:55:52.570760  259666 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 01:55:52.570771  259666 main.go:143] libmachine: checking permissions on dir: /home
	I1209 01:55:52.570778  259666 main.go:143] libmachine: skipping /home - not owner
	I1209 01:55:52.570782  259666 main.go:143] libmachine: defining domain...
	I1209 01:55:52.572359  259666 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-712341</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/addons-712341.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-712341'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1209 01:55:52.577681  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:07:49:0d in network default
	I1209 01:55:52.578359  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:55:52.578380  259666 main.go:143] libmachine: starting domain...
	I1209 01:55:52.578385  259666 main.go:143] libmachine: ensuring networks are active...
	I1209 01:55:52.579326  259666 main.go:143] libmachine: Ensuring network default is active
	I1209 01:55:52.579778  259666 main.go:143] libmachine: Ensuring network mk-addons-712341 is active
	I1209 01:55:52.580598  259666 main.go:143] libmachine: getting domain XML...
	I1209 01:55:52.581902  259666 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-712341</name>
	  <uuid>870ec28c-5b88-46bc-b908-87091429a736</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/addons-712341.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:c8:8f:0e'/>
	      <source network='mk-addons-712341'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:07:49:0d'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1209 01:55:53.877327  259666 main.go:143] libmachine: waiting for domain to start...
	I1209 01:55:53.878860  259666 main.go:143] libmachine: domain is now running
	I1209 01:55:53.878883  259666 main.go:143] libmachine: waiting for IP...
	I1209 01:55:53.879804  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:55:53.880424  259666 main.go:143] libmachine: no network interface addresses found for domain addons-712341 (source=lease)
	I1209 01:55:53.880457  259666 main.go:143] libmachine: trying to list again with source=arp
	I1209 01:55:53.880867  259666 main.go:143] libmachine: unable to find current IP address of domain addons-712341 in network mk-addons-712341 (interfaces detected: [])
	I1209 01:55:53.880926  259666 retry.go:31] will retry after 265.397085ms: waiting for domain to come up
	I1209 01:55:54.148713  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:55:54.149545  259666 main.go:143] libmachine: no network interface addresses found for domain addons-712341 (source=lease)
	I1209 01:55:54.149566  259666 main.go:143] libmachine: trying to list again with source=arp
	I1209 01:55:54.149932  259666 main.go:143] libmachine: unable to find current IP address of domain addons-712341 in network mk-addons-712341 (interfaces detected: [])
	I1209 01:55:54.150001  259666 retry.go:31] will retry after 307.385775ms: waiting for domain to come up
	I1209 01:55:54.458653  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:55:54.459509  259666 main.go:143] libmachine: no network interface addresses found for domain addons-712341 (source=lease)
	I1209 01:55:54.459528  259666 main.go:143] libmachine: trying to list again with source=arp
	I1209 01:55:54.460047  259666 main.go:143] libmachine: unable to find current IP address of domain addons-712341 in network mk-addons-712341 (interfaces detected: [])
	I1209 01:55:54.460093  259666 retry.go:31] will retry after 395.041534ms: waiting for domain to come up
	I1209 01:55:54.856811  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:55:54.857628  259666 main.go:143] libmachine: no network interface addresses found for domain addons-712341 (source=lease)
	I1209 01:55:54.857646  259666 main.go:143] libmachine: trying to list again with source=arp
	I1209 01:55:54.858038  259666 main.go:143] libmachine: unable to find current IP address of domain addons-712341 in network mk-addons-712341 (interfaces detected: [])
	I1209 01:55:54.858082  259666 retry.go:31] will retry after 374.275906ms: waiting for domain to come up
	I1209 01:55:55.233758  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:55:55.234551  259666 main.go:143] libmachine: no network interface addresses found for domain addons-712341 (source=lease)
	I1209 01:55:55.234570  259666 main.go:143] libmachine: trying to list again with source=arp
	I1209 01:55:55.234982  259666 main.go:143] libmachine: unable to find current IP address of domain addons-712341 in network mk-addons-712341 (interfaces detected: [])
	I1209 01:55:55.235028  259666 retry.go:31] will retry after 747.649275ms: waiting for domain to come up
	I1209 01:55:55.984035  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:55:55.984743  259666 main.go:143] libmachine: no network interface addresses found for domain addons-712341 (source=lease)
	I1209 01:55:55.984755  259666 main.go:143] libmachine: trying to list again with source=arp
	I1209 01:55:55.985073  259666 main.go:143] libmachine: unable to find current IP address of domain addons-712341 in network mk-addons-712341 (interfaces detected: [])
	I1209 01:55:55.985109  259666 retry.go:31] will retry after 865.91237ms: waiting for domain to come up
	I1209 01:55:56.852567  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:55:56.853208  259666 main.go:143] libmachine: no network interface addresses found for domain addons-712341 (source=lease)
	I1209 01:55:56.853229  259666 main.go:143] libmachine: trying to list again with source=arp
	I1209 01:55:56.853581  259666 main.go:143] libmachine: unable to find current IP address of domain addons-712341 in network mk-addons-712341 (interfaces detected: [])
	I1209 01:55:56.853621  259666 retry.go:31] will retry after 1.052488212s: waiting for domain to come up
	I1209 01:55:57.908017  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:55:57.908872  259666 main.go:143] libmachine: no network interface addresses found for domain addons-712341 (source=lease)
	I1209 01:55:57.908903  259666 main.go:143] libmachine: trying to list again with source=arp
	I1209 01:55:57.909276  259666 main.go:143] libmachine: unable to find current IP address of domain addons-712341 in network mk-addons-712341 (interfaces detected: [])
	I1209 01:55:57.909322  259666 retry.go:31] will retry after 1.187266906s: waiting for domain to come up
	I1209 01:55:59.098780  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:55:59.099456  259666 main.go:143] libmachine: no network interface addresses found for domain addons-712341 (source=lease)
	I1209 01:55:59.099474  259666 main.go:143] libmachine: trying to list again with source=arp
	I1209 01:55:59.099856  259666 main.go:143] libmachine: unable to find current IP address of domain addons-712341 in network mk-addons-712341 (interfaces detected: [])
	I1209 01:55:59.099900  259666 retry.go:31] will retry after 1.462600886s: waiting for domain to come up
	I1209 01:56:00.564917  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:00.565697  259666 main.go:143] libmachine: no network interface addresses found for domain addons-712341 (source=lease)
	I1209 01:56:00.565718  259666 main.go:143] libmachine: trying to list again with source=arp
	I1209 01:56:00.566186  259666 main.go:143] libmachine: unable to find current IP address of domain addons-712341 in network mk-addons-712341 (interfaces detected: [])
	I1209 01:56:00.566236  259666 retry.go:31] will retry after 1.786857993s: waiting for domain to come up
	I1209 01:56:02.355216  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:02.356156  259666 main.go:143] libmachine: no network interface addresses found for domain addons-712341 (source=lease)
	I1209 01:56:02.356186  259666 main.go:143] libmachine: trying to list again with source=arp
	I1209 01:56:02.356607  259666 main.go:143] libmachine: unable to find current IP address of domain addons-712341 in network mk-addons-712341 (interfaces detected: [])
	I1209 01:56:02.356674  259666 retry.go:31] will retry after 2.31997202s: waiting for domain to come up
	I1209 01:56:04.678970  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:04.679666  259666 main.go:143] libmachine: no network interface addresses found for domain addons-712341 (source=lease)
	I1209 01:56:04.679684  259666 main.go:143] libmachine: trying to list again with source=arp
	I1209 01:56:04.680272  259666 main.go:143] libmachine: unable to find current IP address of domain addons-712341 in network mk-addons-712341 (interfaces detected: [])
	I1209 01:56:04.680321  259666 retry.go:31] will retry after 3.342048068s: waiting for domain to come up
	I1209 01:56:08.024041  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:08.024748  259666 main.go:143] libmachine: no network interface addresses found for domain addons-712341 (source=lease)
	I1209 01:56:08.024764  259666 main.go:143] libmachine: trying to list again with source=arp
	I1209 01:56:08.025270  259666 main.go:143] libmachine: unable to find current IP address of domain addons-712341 in network mk-addons-712341 (interfaces detected: [])
	I1209 01:56:08.025321  259666 retry.go:31] will retry after 4.37421634s: waiting for domain to come up
	I1209 01:56:12.400710  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:12.401456  259666 main.go:143] libmachine: domain addons-712341 has current primary IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:12.401476  259666 main.go:143] libmachine: found domain IP: 192.168.39.107
	I1209 01:56:12.401485  259666 main.go:143] libmachine: reserving static IP address...
	I1209 01:56:12.401874  259666 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-712341", mac: "52:54:00:c8:8f:0e", ip: "192.168.39.107"} in network mk-addons-712341
	I1209 01:56:12.607096  259666 main.go:143] libmachine: reserved static IP address 192.168.39.107 for domain addons-712341
	I1209 01:56:12.607121  259666 main.go:143] libmachine: waiting for SSH...
	I1209 01:56:12.607140  259666 main.go:143] libmachine: Getting to WaitForSSH function...
	I1209 01:56:12.610473  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:12.611080  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:12.611125  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:12.611385  259666 main.go:143] libmachine: Using SSH client type: native
	I1209 01:56:12.611719  259666 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I1209 01:56:12.611734  259666 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1209 01:56:12.744903  259666 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 01:56:12.745324  259666 main.go:143] libmachine: domain creation complete
	I1209 01:56:12.746730  259666 machine.go:94] provisionDockerMachine start ...
	I1209 01:56:12.749465  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:12.749882  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:12.749908  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:12.750143  259666 main.go:143] libmachine: Using SSH client type: native
	I1209 01:56:12.750389  259666 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I1209 01:56:12.750402  259666 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 01:56:12.871555  259666 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 01:56:12.871592  259666 buildroot.go:166] provisioning hostname "addons-712341"
	I1209 01:56:12.874844  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:12.875400  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:12.875435  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:12.875691  259666 main.go:143] libmachine: Using SSH client type: native
	I1209 01:56:12.875907  259666 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I1209 01:56:12.875921  259666 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-712341 && echo "addons-712341" | sudo tee /etc/hostname
	I1209 01:56:13.015342  259666 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-712341
	
	I1209 01:56:13.018668  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:13.019132  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:13.019166  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:13.019363  259666 main.go:143] libmachine: Using SSH client type: native
	I1209 01:56:13.019625  259666 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I1209 01:56:13.019642  259666 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-712341' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-712341/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-712341' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 01:56:13.150128  259666 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 01:56:13.150174  259666 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22081-254936/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-254936/.minikube}
	I1209 01:56:13.150238  259666 buildroot.go:174] setting up certificates
	I1209 01:56:13.150249  259666 provision.go:84] configureAuth start
	I1209 01:56:13.153162  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:13.153669  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:13.153697  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:13.156274  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:13.156657  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:13.156679  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:13.156810  259666 provision.go:143] copyHostCerts
	I1209 01:56:13.156932  259666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-254936/.minikube/ca.pem (1078 bytes)
	I1209 01:56:13.157133  259666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-254936/.minikube/cert.pem (1123 bytes)
	I1209 01:56:13.157239  259666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-254936/.minikube/key.pem (1679 bytes)
	I1209 01:56:13.157331  259666 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-254936/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca-key.pem org=jenkins.addons-712341 san=[127.0.0.1 192.168.39.107 addons-712341 localhost minikube]
	I1209 01:56:13.302563  259666 provision.go:177] copyRemoteCerts
	I1209 01:56:13.302629  259666 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 01:56:13.305577  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:13.306131  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:13.306164  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:13.306378  259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
	I1209 01:56:13.399103  259666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 01:56:13.432081  259666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 01:56:13.466622  259666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 01:56:13.498405  259666 provision.go:87] duration metric: took 348.137312ms to configureAuth
	I1209 01:56:13.498438  259666 buildroot.go:189] setting minikube options for container-runtime
	I1209 01:56:13.498635  259666 config.go:182] Loaded profile config "addons-712341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:56:13.502099  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:13.502553  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:13.502581  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:13.502878  259666 main.go:143] libmachine: Using SSH client type: native
	I1209 01:56:13.503105  259666 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I1209 01:56:13.503123  259666 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 01:56:13.938598  259666 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 01:56:13.938629  259666 machine.go:97] duration metric: took 1.191878481s to provisionDockerMachine
	I1209 01:56:13.938642  259666 client.go:176] duration metric: took 21.968186831s to LocalClient.Create
	I1209 01:56:13.938697  259666 start.go:167] duration metric: took 21.968265519s to libmachine.API.Create "addons-712341"
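As an aside on the sysconfig step above: the CRIO_MINIKUBE_OPTIONS drop-in only has an effect if the crio unit on the ISO reads it as an EnvironmentFile, which is the assumption behind writing /etc/sysconfig/crio.minikube. A minimal, hedged way to confirm the flag actually reached the daemon (standard systemd/procps tools; the EnvironmentFile wiring is assumed, not shown in this log):

	# Confirm the insecure-registry flag written above was picked up by CRI-O.
	sudo cat /etc/sysconfig/crio.minikube          # drop-in written by minikube
	systemctl cat crio | grep -i environment       # unit is assumed to source the drop-in
	pgrep -af 'crio.*--insecure-registry'          # flag visible on the running daemon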
	I1209 01:56:13.938710  259666 start.go:293] postStartSetup for "addons-712341" (driver="kvm2")
	I1209 01:56:13.938723  259666 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 01:56:13.938814  259666 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 01:56:13.942173  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:13.942615  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:13.942638  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:13.942785  259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
	I1209 01:56:14.035480  259666 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 01:56:14.041119  259666 info.go:137] Remote host: Buildroot 2025.02
	I1209 01:56:14.041165  259666 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-254936/.minikube/addons for local assets ...
	I1209 01:56:14.041259  259666 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-254936/.minikube/files for local assets ...
	I1209 01:56:14.041296  259666 start.go:296] duration metric: took 102.577833ms for postStartSetup
	I1209 01:56:14.079987  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:14.080490  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:14.080525  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:14.080837  259666 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/config.json ...
	I1209 01:56:14.081108  259666 start.go:128] duration metric: took 22.113482831s to createHost
	I1209 01:56:14.083545  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:14.084053  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:14.084082  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:14.084311  259666 main.go:143] libmachine: Using SSH client type: native
	I1209 01:56:14.084523  259666 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I1209 01:56:14.084533  259666 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1209 01:56:14.206933  259666 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765245374.173473349
	
	I1209 01:56:14.206983  259666 fix.go:216] guest clock: 1765245374.173473349
	I1209 01:56:14.206992  259666 fix.go:229] Guest: 2025-12-09 01:56:14.173473349 +0000 UTC Remote: 2025-12-09 01:56:14.081142247 +0000 UTC m=+22.219468127 (delta=92.331102ms)
	I1209 01:56:14.207010  259666 fix.go:200] guest clock delta is within tolerance: 92.331102ms
	I1209 01:56:14.207016  259666 start.go:83] releasing machines lock for "addons-712341", held for 22.239498517s
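The guest-clock check above compares the VM's `date +%s.%N` output against the host-side reference time. A small sketch reproducing the delta arithmetic, with the two timestamps copied from the log (the tolerance threshold itself is not printed here):

	# Guest/host clock delta, using the exact timestamps logged above.
	guest=1765245374.173473349    # from `date +%s.%N` inside the VM
	remote=1765245374.081142247   # host-side reference (epoch seconds)
	echo "($guest - $remote) * 1000" | bc -l   # prints 92.331102000 (ms), the delta in the log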
	I1209 01:56:14.210153  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:14.210598  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:14.210626  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:14.211219  259666 ssh_runner.go:195] Run: cat /version.json
	I1209 01:56:14.211304  259666 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 01:56:14.214631  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:14.215099  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:14.215100  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:14.215160  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:14.215375  259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
	I1209 01:56:14.215685  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:14.215718  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:14.215908  259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
	I1209 01:56:14.324217  259666 ssh_runner.go:195] Run: systemctl --version
	I1209 01:56:14.331424  259666 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 01:56:14.776975  259666 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 01:56:14.786222  259666 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 01:56:14.786312  259666 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 01:56:14.810312  259666 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 01:56:14.810358  259666 start.go:496] detecting cgroup driver to use...
	I1209 01:56:14.810945  259666 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 01:56:14.834532  259666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 01:56:14.855018  259666 docker.go:218] disabling cri-docker service (if available) ...
	I1209 01:56:14.855097  259666 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 01:56:14.873689  259666 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 01:56:14.892588  259666 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 01:56:15.050682  259666 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 01:56:15.201515  259666 docker.go:234] disabling docker service ...
	I1209 01:56:15.201600  259666 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 01:56:15.219273  259666 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 01:56:15.236589  259666 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 01:56:15.461375  259666 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 01:56:15.606689  259666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 01:56:15.623886  259666 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 01:56:15.649306  259666 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 01:56:15.649375  259666 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 01:56:15.662394  259666 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 01:56:15.662493  259666 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 01:56:15.675986  259666 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 01:56:15.689401  259666 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 01:56:15.702735  259666 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 01:56:15.718200  259666 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 01:56:15.731978  259666 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 01:56:15.756449  259666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
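The sed edits above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) amount to making sure the CRI-O drop-in carries a handful of keys. A rough sketch of the resulting content, for orientation only (the real /etc/crio/crio.conf.d/02-crio.conf on the ISO contains more settings; section placement follows CRI-O's documented config layout):

	# Illustrative end state of the keys edited above -- not a file to write verbatim.
	cat <<'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF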
	I1209 01:56:15.770276  259666 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 01:56:15.782285  259666 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 01:56:15.782357  259666 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 01:56:15.804234  259666 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
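The failed sysctl above is expected when br_netfilter is not loaded yet; the log falls back to modprobe and then enables IP forwarding. The same check-then-load sequence by hand, with a re-check added:

	# net.bridge.* sysctls only exist once the br_netfilter module is loaded.
	sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables     # should resolve after modprobe
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward    # same ip_forward step as above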
	I1209 01:56:15.817233  259666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 01:56:15.962723  259666 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 01:56:16.084810  259666 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 01:56:16.084937  259666 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 01:56:16.090926  259666 start.go:564] Will wait 60s for crictl version
	I1209 01:56:16.091023  259666 ssh_runner.go:195] Run: which crictl
	I1209 01:56:16.095927  259666 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 01:56:16.136298  259666 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 01:56:16.136400  259666 ssh_runner.go:195] Run: crio --version
	I1209 01:56:16.169730  259666 ssh_runner.go:195] Run: crio --version
	I1209 01:56:16.204330  259666 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1209 01:56:16.208592  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:16.209048  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:16.209074  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:16.209342  259666 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 01:56:16.214627  259666 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 01:56:16.230641  259666 kubeadm.go:884] updating cluster {Name:addons-712341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-712341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 01:56:16.230774  259666 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 01:56:16.230844  259666 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 01:56:16.263569  259666 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1209 01:56:16.263646  259666 ssh_runner.go:195] Run: which lz4
	I1209 01:56:16.268308  259666 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 01:56:16.273636  259666 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 01:56:16.273675  259666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1209 01:56:17.705360  259666 crio.go:462] duration metric: took 1.43708175s to copy over tarball
	I1209 01:56:17.705457  259666 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 01:56:19.113008  259666 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.407507771s)
	I1209 01:56:19.113035  259666 crio.go:469] duration metric: took 1.407642549s to extract the tarball
	I1209 01:56:19.113043  259666 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 01:56:19.150009  259666 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 01:56:19.191699  259666 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 01:56:19.191722  259666 cache_images.go:86] Images are preloaded, skipping loading
	I1209 01:56:19.191731  259666 kubeadm.go:935] updating node { 192.168.39.107 8443 v1.34.2 crio true true} ...
	I1209 01:56:19.191895  259666 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-712341 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-712341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 01:56:19.192020  259666 ssh_runner.go:195] Run: crio config
	I1209 01:56:19.240095  259666 cni.go:84] Creating CNI manager for ""
	I1209 01:56:19.240118  259666 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 01:56:19.240141  259666 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 01:56:19.240169  259666 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.107 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-712341 NodeName:addons-712341 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 01:56:19.240343  259666 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-712341"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.107"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.107"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 01:56:19.240426  259666 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 01:56:19.253940  259666 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 01:56:19.254021  259666 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 01:56:19.266741  259666 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1209 01:56:19.288805  259666 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 01:56:19.311434  259666 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
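The generated kubeadm config shown above is staged as /var/tmp/minikube/kubeadm.yaml.new before init. If one wanted to sanity-check it by hand, recent kubeadm releases ship a validation subcommand; a sketch using the binary path and file name from this log (purely optional, not something the test performs):

	# Hypothetical manual validation of the staged kubeadm config.
	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new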
	I1209 01:56:19.334386  259666 ssh_runner.go:195] Run: grep 192.168.39.107	control-plane.minikube.internal$ /etc/hosts
	I1209 01:56:19.339285  259666 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 01:56:19.355563  259666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 01:56:19.505046  259666 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 01:56:19.536671  259666 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341 for IP: 192.168.39.107
	I1209 01:56:19.536706  259666 certs.go:195] generating shared ca certs ...
	I1209 01:56:19.536731  259666 certs.go:227] acquiring lock for ca certs: {Name:mk538e8c05758246ce904354c7e7ace78887d181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:19.536988  259666 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-254936/.minikube/ca.key
	I1209 01:56:19.588349  259666 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-254936/.minikube/ca.crt ...
	I1209 01:56:19.588384  259666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-254936/.minikube/ca.crt: {Name:mk25984b3e32ec9734e4cda7734262a1d8004f76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:19.588566  259666 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-254936/.minikube/ca.key ...
	I1209 01:56:19.588578  259666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-254936/.minikube/ca.key: {Name:mkdb18c3362861140a9d6339271fb0245c707c4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:19.588653  259666 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-254936/.minikube/proxy-client-ca.key
	I1209 01:56:19.616688  259666 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-254936/.minikube/proxy-client-ca.crt ...
	I1209 01:56:19.616718  259666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-254936/.minikube/proxy-client-ca.crt: {Name:mkf201d94ce9a38ac3d2e3ba9845b3ebc459b0cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:19.616892  259666 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-254936/.minikube/proxy-client-ca.key ...
	I1209 01:56:19.616905  259666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-254936/.minikube/proxy-client-ca.key: {Name:mk4c6834e6ed7ee10958c4e629376a30863c157f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:19.616974  259666 certs.go:257] generating profile certs ...
	I1209 01:56:19.617046  259666 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.key
	I1209 01:56:19.617061  259666 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt with IP's: []
	I1209 01:56:19.675363  259666 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt ...
	I1209 01:56:19.675392  259666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: {Name:mk2c6d9f6571abe7785206344ce34d3204c868fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:19.675553  259666 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.key ...
	I1209 01:56:19.675564  259666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.key: {Name:mk7400720eb975505d75ffc51097ac8ebc198c37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:19.675646  259666 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/apiserver.key.ed6545a3
	I1209 01:56:19.675667  259666 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/apiserver.crt.ed6545a3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.107]
	I1209 01:56:19.794358  259666 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/apiserver.crt.ed6545a3 ...
	I1209 01:56:19.794387  259666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/apiserver.crt.ed6545a3: {Name:mkea62fdcc90205c9f4d045336442f3cf6198861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:19.794553  259666 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/apiserver.key.ed6545a3 ...
	I1209 01:56:19.794566  259666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/apiserver.key.ed6545a3: {Name:mk47cad7e2e670aeb0c3d5eabd691889e41b7c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:19.794644  259666 certs.go:382] copying /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/apiserver.crt.ed6545a3 -> /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/apiserver.crt
	I1209 01:56:19.794713  259666 certs.go:386] copying /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/apiserver.key.ed6545a3 -> /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/apiserver.key
	I1209 01:56:19.794760  259666 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/proxy-client.key
	I1209 01:56:19.794777  259666 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/proxy-client.crt with IP's: []
	I1209 01:56:19.883801  259666 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/proxy-client.crt ...
	I1209 01:56:19.883842  259666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/proxy-client.crt: {Name:mk1385aa835cc65b91e728b4ed5b58a37ba1a4d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:19.884009  259666 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/proxy-client.key ...
	I1209 01:56:19.884023  259666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/proxy-client.key: {Name:mk07fbac3e70b8b1b55759f366cd54064f477753 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:19.884203  259666 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 01:56:19.884241  259666 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem (1078 bytes)
	I1209 01:56:19.884270  259666 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/cert.pem (1123 bytes)
	I1209 01:56:19.884296  259666 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/key.pem (1679 bytes)
	I1209 01:56:19.884926  259666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 01:56:19.918215  259666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 01:56:19.949658  259666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 01:56:19.981549  259666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 01:56:20.013363  259666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1209 01:56:20.046556  259666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 01:56:20.080767  259666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 01:56:20.113752  259666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 01:56:20.146090  259666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 01:56:20.179542  259666 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 01:56:20.202726  259666 ssh_runner.go:195] Run: openssl version
	I1209 01:56:20.209978  259666 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 01:56:20.228445  259666 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 01:56:20.242414  259666 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 01:56:20.249302  259666 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1209 01:56:20.249372  259666 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 01:56:20.260886  259666 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 01:56:20.277482  259666 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
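The b5213941.0 symlink name above is not arbitrary: it is the OpenSSL subject hash of the minikube CA, which is how the /etc/ssl/certs lookup scheme finds the certificate. A short sketch of the derivation, using the paths from the log:

	# The <hash>.0 symlink name comes from the CA's OpenSSL subject hash.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "$hash"                                        # b5213941 for this CA
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"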
	I1209 01:56:20.293408  259666 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 01:56:20.299356  259666 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 01:56:20.299432  259666 kubeadm.go:401] StartCluster: {Name:addons-712341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-712341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 01:56:20.299521  259666 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 01:56:20.299577  259666 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 01:56:20.338305  259666 cri.go:89] found id: ""
	I1209 01:56:20.338383  259666 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 01:56:20.352247  259666 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 01:56:20.365865  259666 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 01:56:20.379255  259666 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 01:56:20.379277  259666 kubeadm.go:158] found existing configuration files:
	
	I1209 01:56:20.379342  259666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 01:56:20.394007  259666 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 01:56:20.394071  259666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 01:56:20.406959  259666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 01:56:20.418745  259666 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 01:56:20.418817  259666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 01:56:20.432103  259666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 01:56:20.444308  259666 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 01:56:20.444371  259666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 01:56:20.457064  259666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 01:56:20.469264  259666 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 01:56:20.469328  259666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 01:56:20.482738  259666 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 01:56:20.539264  259666 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1209 01:56:20.539333  259666 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 01:56:20.651956  259666 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 01:56:20.652108  259666 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 01:56:20.652228  259666 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 01:56:20.663439  259666 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 01:56:20.705449  259666 out.go:252]   - Generating certificates and keys ...
	I1209 01:56:20.705587  259666 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 01:56:20.705702  259666 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 01:56:20.731893  259666 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 01:56:21.123152  259666 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1209 01:56:21.435961  259666 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1209 01:56:22.056052  259666 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1209 01:56:22.330258  259666 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1209 01:56:22.330731  259666 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-712341 localhost] and IPs [192.168.39.107 127.0.0.1 ::1]
	I1209 01:56:22.534479  259666 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1209 01:56:22.535545  259666 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-712341 localhost] and IPs [192.168.39.107 127.0.0.1 ::1]
	I1209 01:56:22.839733  259666 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 01:56:23.345878  259666 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 01:56:23.492556  259666 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1209 01:56:23.492627  259666 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 01:56:23.808202  259666 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 01:56:24.210780  259666 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 01:56:24.519003  259666 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 01:56:24.731456  259666 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 01:56:25.386737  259666 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 01:56:25.389328  259666 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 01:56:25.392681  259666 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 01:56:25.395117  259666 out.go:252]   - Booting up control plane ...
	I1209 01:56:25.395242  259666 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 01:56:25.395330  259666 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 01:56:25.395405  259666 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 01:56:25.413924  259666 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 01:56:25.414050  259666 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 01:56:25.423352  259666 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 01:56:25.423498  259666 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 01:56:25.423566  259666 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 01:56:25.601588  259666 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 01:56:25.601741  259666 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 01:56:27.102573  259666 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501728248s
	I1209 01:56:27.106697  259666 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1209 01:56:27.106805  259666 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.107:8443/livez
	I1209 01:56:27.106916  259666 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1209 01:56:27.107043  259666 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1209 01:56:30.705714  259666 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.599747652s
	I1209 01:56:31.358423  259666 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.252497162s
	I1209 01:56:33.106263  259666 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001710585s
	I1209 01:56:33.128662  259666 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 01:56:33.144350  259666 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 01:56:33.170578  259666 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 01:56:33.172164  259666 kubeadm.go:319] [mark-control-plane] Marking the node addons-712341 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 01:56:33.191436  259666 kubeadm.go:319] [bootstrap-token] Using token: 7em9fe.8onfni9y9x6y6345
	I1209 01:56:33.192896  259666 out.go:252]   - Configuring RBAC rules ...
	I1209 01:56:33.193068  259666 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 01:56:33.199789  259666 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 01:56:33.210993  259666 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 01:56:33.215851  259666 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 01:56:33.220512  259666 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 01:56:33.224892  259666 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 01:56:33.512967  259666 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 01:56:33.964307  259666 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1209 01:56:34.514479  259666 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1209 01:56:34.514515  259666 kubeadm.go:319] 
	I1209 01:56:34.514600  259666 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1209 01:56:34.514614  259666 kubeadm.go:319] 
	I1209 01:56:34.514734  259666 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1209 01:56:34.514750  259666 kubeadm.go:319] 
	I1209 01:56:34.514785  259666 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1209 01:56:34.514916  259666 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 01:56:34.514993  259666 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 01:56:34.515006  259666 kubeadm.go:319] 
	I1209 01:56:34.515081  259666 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1209 01:56:34.515093  259666 kubeadm.go:319] 
	I1209 01:56:34.515176  259666 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 01:56:34.515188  259666 kubeadm.go:319] 
	I1209 01:56:34.515266  259666 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1209 01:56:34.515373  259666 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 01:56:34.515481  259666 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 01:56:34.515492  259666 kubeadm.go:319] 
	I1209 01:56:34.515633  259666 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 01:56:34.515752  259666 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1209 01:56:34.515760  259666 kubeadm.go:319] 
	I1209 01:56:34.515878  259666 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 7em9fe.8onfni9y9x6y6345 \
	I1209 01:56:34.516049  259666 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0be7e0a7baa75d08b526e8b854bf3b813e93f67dd991ef9945e4881192856bde \
	I1209 01:56:34.516087  259666 kubeadm.go:319] 	--control-plane 
	I1209 01:56:34.516096  259666 kubeadm.go:319] 
	I1209 01:56:34.516232  259666 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1209 01:56:34.516257  259666 kubeadm.go:319] 
	I1209 01:56:34.516386  259666 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 7em9fe.8onfni9y9x6y6345 \
	I1209 01:56:34.516533  259666 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0be7e0a7baa75d08b526e8b854bf3b813e93f67dd991ef9945e4881192856bde 
	I1209 01:56:34.517919  259666 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
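The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed with the standard kubeadm recipe; the CA path below is the one minikube used earlier in this log, and the key is assumed to be RSA (which is what minikube generates):

	# Recompute the discovery hash from the join command above.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex \
	  | sed 's/^.* //'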
	I1209 01:56:34.517955  259666 cni.go:84] Creating CNI manager for ""
	I1209 01:56:34.517966  259666 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 01:56:34.519911  259666 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 01:56:34.521458  259666 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 01:56:34.535811  259666 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
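The 496-byte /etc/cni/net.d/1-k8s.conflist written above is the bridge CNI config referenced by the "Configuring bridge CNI" step. Its literal content is not in this log, so the snippet below is only an illustrative bridge+portmap conflist; the one value taken from this log is the 10.244.0.0/16 pod CIDR:

	# Illustrative bridge CNI conflist -- NOT the literal 1-k8s.conflist from the ISO.
	cat <<'EOF'
	{
	  "cniVersion": "1.0.0",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
	      "ipam": { "type": "host-local", "ranges": [[{ "subnet": "10.244.0.0/16" }]] } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF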
	I1209 01:56:34.571626  259666 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 01:56:34.571716  259666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-712341 minikube.k8s.io/updated_at=2025_12_09T01_56_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d minikube.k8s.io/name=addons-712341 minikube.k8s.io/primary=true
	I1209 01:56:34.571716  259666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:34.737094  259666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:34.778269  259666 ops.go:34] apiserver oom_adj: -16
	I1209 01:56:35.237350  259666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:35.737733  259666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:36.237949  259666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:36.737187  259666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:37.237296  259666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:37.737971  259666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:38.237362  259666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:38.737165  259666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:39.237194  259666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:39.737115  259666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 01:56:39.843309  259666 kubeadm.go:1114] duration metric: took 5.271690903s to wait for elevateKubeSystemPrivileges
	I1209 01:56:39.843359  259666 kubeadm.go:403] duration metric: took 19.543933591s to StartCluster
	I1209 01:56:39.843385  259666 settings.go:142] acquiring lock: {Name:mkec34d0133156567c6c6050ab2f8de3f197c63b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:39.843542  259666 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-254936/kubeconfig
	I1209 01:56:39.844035  259666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-254936/kubeconfig: {Name:mkaafbe94dbea876978b17d37022d815642e1aad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 01:56:39.844312  259666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 01:56:39.844306  259666 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 01:56:39.844339  259666 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1209 01:56:39.844460  259666 addons.go:70] Setting yakd=true in profile "addons-712341"
	I1209 01:56:39.844469  259666 addons.go:70] Setting inspektor-gadget=true in profile "addons-712341"
	I1209 01:56:39.844487  259666 addons.go:239] Setting addon inspektor-gadget=true in "addons-712341"
	I1209 01:56:39.844489  259666 addons.go:70] Setting registry-creds=true in profile "addons-712341"
	I1209 01:56:39.844495  259666 addons.go:70] Setting storage-provisioner=true in profile "addons-712341"
	I1209 01:56:39.844506  259666 addons.go:239] Setting addon storage-provisioner=true in "addons-712341"
	I1209 01:56:39.844523  259666 addons.go:239] Setting addon registry-creds=true in "addons-712341"
	I1209 01:56:39.844532  259666 host.go:66] Checking if "addons-712341" exists ...
	I1209 01:56:39.844528  259666 addons.go:70] Setting volcano=true in profile "addons-712341"
	I1209 01:56:39.844543  259666 addons.go:70] Setting volumesnapshots=true in profile "addons-712341"
	I1209 01:56:39.844553  259666 addons.go:239] Setting addon volcano=true in "addons-712341"
	I1209 01:56:39.844553  259666 addons.go:239] Setting addon volumesnapshots=true in "addons-712341"
	I1209 01:56:39.844559  259666 host.go:66] Checking if "addons-712341" exists ...
	I1209 01:56:39.844573  259666 host.go:66] Checking if "addons-712341" exists ...
	I1209 01:56:39.844582  259666 host.go:66] Checking if "addons-712341" exists ...
	I1209 01:56:39.844596  259666 config.go:182] Loaded profile config "addons-712341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:56:39.844657  259666 addons.go:70] Setting default-storageclass=true in profile "addons-712341"
	I1209 01:56:39.844689  259666 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-712341"
	I1209 01:56:39.844812  259666 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-712341"
	I1209 01:56:39.844863  259666 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-712341"
	I1209 01:56:39.844892  259666 host.go:66] Checking if "addons-712341" exists ...
	I1209 01:56:39.844962  259666 addons.go:70] Setting ingress=true in profile "addons-712341"
	I1209 01:56:39.844994  259666 addons.go:239] Setting addon ingress=true in "addons-712341"
	I1209 01:56:39.845044  259666 host.go:66] Checking if "addons-712341" exists ...
	I1209 01:56:39.845723  259666 addons.go:70] Setting registry=true in profile "addons-712341"
	I1209 01:56:39.845754  259666 addons.go:239] Setting addon registry=true in "addons-712341"
	I1209 01:56:39.845783  259666 host.go:66] Checking if "addons-712341" exists ...
	I1209 01:56:39.845851  259666 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-712341"
	I1209 01:56:39.844532  259666 host.go:66] Checking if "addons-712341" exists ...
	I1209 01:56:39.845881  259666 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-712341"
	I1209 01:56:39.845908  259666 host.go:66] Checking if "addons-712341" exists ...
	I1209 01:56:39.845993  259666 addons.go:70] Setting gcp-auth=true in profile "addons-712341"
	I1209 01:56:39.846014  259666 mustload.go:66] Loading cluster: addons-712341
	I1209 01:56:39.846186  259666 config.go:182] Loaded profile config "addons-712341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 01:56:39.846244  259666 addons.go:70] Setting ingress-dns=true in profile "addons-712341"
	I1209 01:56:39.846269  259666 addons.go:239] Setting addon ingress-dns=true in "addons-712341"
	I1209 01:56:39.846308  259666 host.go:66] Checking if "addons-712341" exists ...
	I1209 01:56:39.846470  259666 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-712341"
	I1209 01:56:39.846654  259666 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-712341"
	I1209 01:56:39.846501  259666 addons.go:70] Setting metrics-server=true in profile "addons-712341"
	I1209 01:56:39.846867  259666 addons.go:239] Setting addon metrics-server=true in "addons-712341"
	I1209 01:56:39.846898  259666 host.go:66] Checking if "addons-712341" exists ...
	I1209 01:56:39.846522  259666 addons.go:70] Setting cloud-spanner=true in profile "addons-712341"
	I1209 01:56:39.847153  259666 out.go:179] * Verifying Kubernetes components...
	I1209 01:56:39.847166  259666 addons.go:239] Setting addon cloud-spanner=true in "addons-712341"
	I1209 01:56:39.847249  259666 host.go:66] Checking if "addons-712341" exists ...
	I1209 01:56:39.844487  259666 addons.go:239] Setting addon yakd=true in "addons-712341"
	I1209 01:56:39.847501  259666 host.go:66] Checking if "addons-712341" exists ...
	I1209 01:56:39.846535  259666 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-712341"
	I1209 01:56:39.847893  259666 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-712341"
	I1209 01:56:39.847936  259666 host.go:66] Checking if "addons-712341" exists ...
	I1209 01:56:39.849485  259666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 01:56:39.852876  259666 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1209 01:56:39.852978  259666 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1209 01:56:39.852998  259666 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1209 01:56:39.853531  259666 addons.go:239] Setting addon default-storageclass=true in "addons-712341"
	I1209 01:56:39.853729  259666 host.go:66] Checking if "addons-712341" exists ...
	I1209 01:56:39.855154  259666 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1209 01:56:39.855236  259666 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1209 01:56:39.855625  259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1209 01:56:39.855240  259666 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1209 01:56:39.855249  259666 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1209 01:56:39.855771  259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	W1209 01:56:39.855296  259666 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1209 01:56:39.856094  259666 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1209 01:56:39.856111  259666 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1209 01:56:39.856271  259666 host.go:66] Checking if "addons-712341" exists ...
	I1209 01:56:39.857017  259666 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 01:56:39.857043  259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1209 01:56:39.857810  259666 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1209 01:56:39.857947  259666 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 01:56:39.858254  259666 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-712341"
	I1209 01:56:39.858316  259666 host.go:66] Checking if "addons-712341" exists ...
	I1209 01:56:39.858730  259666 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1209 01:56:39.859802  259666 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1209 01:56:39.859802  259666 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1209 01:56:39.859811  259666 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1209 01:56:39.859948  259666 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 01:56:39.860400  259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 01:56:39.860771  259666 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 01:56:39.860791  259666 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 01:56:39.860962  259666 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1209 01:56:39.860972  259666 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1209 01:56:39.860985  259666 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1209 01:56:39.861016  259666 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 01:56:39.861629  259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1209 01:56:39.861945  259666 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 01:56:39.861958  259666 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 01:56:39.861970  259666 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 01:56:39.861973  259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1209 01:56:39.862781  259666 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1209 01:56:39.862809  259666 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1209 01:56:39.862781  259666 out.go:179]   - Using image docker.io/registry:3.0.0
	I1209 01:56:39.862934  259666 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1209 01:56:39.863324  259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1209 01:56:39.863618  259666 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1209 01:56:39.865047  259666 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1209 01:56:39.865268  259666 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1209 01:56:39.865488  259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1209 01:56:39.865996  259666 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1209 01:56:39.866116  259666 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 01:56:39.866404  259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1209 01:56:39.866118  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.867682  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.867612  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.868521  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.868801  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:39.868880  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.868882  259666 out.go:179]   - Using image docker.io/busybox:stable
	I1209 01:56:39.868957  259666 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1209 01:56:39.869663  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:39.869700  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.869962  259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
	I1209 01:56:39.870198  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:39.870249  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.870737  259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
	I1209 01:56:39.871182  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:39.871224  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.871364  259666 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 01:56:39.871392  259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1209 01:56:39.871447  259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
	I1209 01:56:39.872347  259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
	I1209 01:56:39.872791  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.873759  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.874071  259666 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1209 01:56:39.875284  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:39.875336  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.875530  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:39.875577  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.875989  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.876193  259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
	I1209 01:56:39.876235  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.876604  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.876636  259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
	I1209 01:56:39.877077  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.877470  259666 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1209 01:56:39.877879  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:39.877955  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:39.877994  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.878213  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:39.878250  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.878319  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.878453  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:39.878485  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.878463  259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
	I1209 01:56:39.878498  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.878723  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.878853  259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
	I1209 01:56:39.879250  259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
	I1209 01:56:39.879265  259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
	I1209 01:56:39.880121  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:39.880158  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.880254  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:39.880282  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.880289  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.880515  259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
	I1209 01:56:39.880731  259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
	I1209 01:56:39.880877  259666 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1209 01:56:39.881108  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:39.881148  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.881410  259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
	I1209 01:56:39.881640  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.882064  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:39.882093  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.882252  259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
	I1209 01:56:39.884531  259666 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1209 01:56:39.886086  259666 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1209 01:56:39.887446  259666 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1209 01:56:39.887488  259666 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1209 01:56:39.890971  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.891705  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:39.891743  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:39.891948  259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
	W1209 01:56:40.226476  259666 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44658->192.168.39.107:22: read: connection reset by peer
	I1209 01:56:40.226526  259666 retry.go:31] will retry after 210.313621ms: ssh: handshake failed: read tcp 192.168.39.1:44658->192.168.39.107:22: read: connection reset by peer
	I1209 01:56:40.721348  259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 01:56:40.765512  259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 01:56:40.794919  259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 01:56:40.827003  259666 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 01:56:40.827038  259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1209 01:56:40.873254  259666 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1209 01:56:40.873291  259666 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1209 01:56:40.927412  259666 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1209 01:56:40.927446  259666 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1209 01:56:40.952921  259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 01:56:40.982112  259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1209 01:56:40.997398  259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 01:56:41.011935  259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 01:56:41.022801  259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 01:56:41.028305  259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1209 01:56:41.043557  259666 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.199195114s)
	I1209 01:56:41.043654  259666 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.194135982s)
	I1209 01:56:41.043696  259666 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1209 01:56:41.043716  259666 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1209 01:56:41.043761  259666 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 01:56:41.043850  259666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
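	(The pipeline in the line above edits the CoreDNS Corefile in place: it inserts a hosts block mapping host.minikube.internal to 192.168.39.1 and enables query logging, then replaces the ConfigMap. A minimal way to confirm the injected record afterwards, assuming kubectl access to the addons-712341 context; these commands are illustrative and not part of the test run:
	  # print the rendered Corefile and show the injected hosts block
	  kubectl --context addons-712341 -n kube-system get configmap coredns \
	    -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
	)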
	I1209 01:56:41.371264  259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1209 01:56:41.540489  259666 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1209 01:56:41.540531  259666 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1209 01:56:41.564199  259666 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1209 01:56:41.564232  259666 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1209 01:56:41.591615  259666 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 01:56:41.591656  259666 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 01:56:41.621118  259666 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1209 01:56:41.621152  259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1209 01:56:41.762637  259666 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1209 01:56:41.762693  259666 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1209 01:56:42.300348  259666 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1209 01:56:42.300384  259666 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1209 01:56:42.321836  259666 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1209 01:56:42.321868  259666 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1209 01:56:42.329733  259666 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 01:56:42.329782  259666 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 01:56:42.355572  259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1209 01:56:42.526693  259666 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1209 01:56:42.526731  259666 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1209 01:56:42.710948  259666 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1209 01:56:42.710982  259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1209 01:56:42.738385  259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 01:56:42.764950  259666 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1209 01:56:42.764985  259666 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1209 01:56:43.076339  259666 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1209 01:56:43.076372  259666 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1209 01:56:43.166119  259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1209 01:56:43.210899  259666 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 01:56:43.210947  259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1209 01:56:43.505346  259666 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1209 01:56:43.505377  259666 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1209 01:56:43.579477  259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.858079572s)
	I1209 01:56:43.635789  259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 01:56:44.136928  259666 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1209 01:56:44.136959  259666 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1209 01:56:44.383525  259666 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1209 01:56:44.383559  259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1209 01:56:44.804448  259666 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1209 01:56:44.804485  259666 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1209 01:56:45.300454  259666 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1209 01:56:45.300486  259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1209 01:56:46.051879  259666 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1209 01:56:46.051930  259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1209 01:56:46.180692  259666 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 01:56:46.180734  259666 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1209 01:56:46.891289  259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 01:56:47.218859  259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.453285533s)
	I1209 01:56:47.218963  259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.423990702s)
	I1209 01:56:47.425891  259666 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1209 01:56:47.429205  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:47.429800  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:47.429852  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:47.430257  259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
	I1209 01:56:48.132124  259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.179158567s)
	I1209 01:56:48.132249  259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.134819653s)
	I1209 01:56:48.132234  259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.150058507s)
	I1209 01:56:48.132324  259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.120363165s)
	I1209 01:56:48.296248  259666 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1209 01:56:48.587835  259666 addons.go:239] Setting addon gcp-auth=true in "addons-712341"
	I1209 01:56:48.587919  259666 host.go:66] Checking if "addons-712341" exists ...
	I1209 01:56:48.590030  259666 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1209 01:56:48.592581  259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:48.593058  259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
	I1209 01:56:48.593083  259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
	I1209 01:56:48.593259  259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
	I1209 01:56:50.113973  259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.09110869s)
	I1209 01:56:50.114024  259666 addons.go:495] Verifying addon ingress=true in "addons-712341"
	I1209 01:56:50.114050  259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (9.085706809s)
	I1209 01:56:50.114159  259666 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.070372525s)
	I1209 01:56:50.114117  259666 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (9.070229331s)
	I1209 01:56:50.114214  259666 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1209 01:56:50.114252  259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (8.742939387s)
	I1209 01:56:50.114372  259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.75875164s)
	I1209 01:56:50.114406  259666 addons.go:495] Verifying addon registry=true in "addons-712341"
	I1209 01:56:50.114509  259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.948343881s)
	I1209 01:56:50.114456  259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.376026923s)
	I1209 01:56:50.115349  259666 addons.go:495] Verifying addon metrics-server=true in "addons-712341"
	I1209 01:56:50.115077  259666 node_ready.go:35] waiting up to 6m0s for node "addons-712341" to be "Ready" ...
	I1209 01:56:50.115858  259666 out.go:179] * Verifying registry addon...
	I1209 01:56:50.115862  259666 out.go:179] * Verifying ingress addon...
	I1209 01:56:50.116778  259666 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-712341 service yakd-dashboard -n yakd-dashboard
	
	I1209 01:56:50.118596  259666 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1209 01:56:50.118737  259666 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1209 01:56:50.160627  259666 node_ready.go:49] node "addons-712341" is "Ready"
	I1209 01:56:50.160668  259666 node_ready.go:38] duration metric: took 45.298865ms for node "addons-712341" to be "Ready" ...
	I1209 01:56:50.160692  259666 api_server.go:52] waiting for apiserver process to appear ...
	I1209 01:56:50.160759  259666 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 01:56:50.184979  259666 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1209 01:56:50.185013  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:50.185073  259666 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1209 01:56:50.185094  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:50.613270  259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.977423664s)
	W1209 01:56:50.613342  259666 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1209 01:56:50.613379  259666 retry.go:31] will retry after 331.842733ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
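	(The failure above is the usual CRD/CR ordering race: the VolumeSnapshotClass object is submitted in the same apply as the CRDs that define it, before those CRDs are established. minikube handles this by retrying the apply after a short delay, as the `apply --force` run below shows. A hedged manual equivalent, assuming kubectl and the in-VM manifest path from the log are both reachable, is to wait for the CRD explicitly first:
	  # wait until the VolumeSnapshotClass CRD is established before applying objects of that kind
	  kubectl wait --for=condition=established --timeout=60s \
	    crd/volumesnapshotclasses.snapshot.storage.k8s.io
	  # then re-apply the snapshot class manifest (in-VM path taken from the log above)
	  kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	)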
	I1209 01:56:50.629391  259666 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-712341" context rescaled to 1 replicas
	I1209 01:56:50.766495  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:50.770748  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:50.945417  259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 01:56:51.140135  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:51.141665  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:51.633657  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:51.641713  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:52.068836  259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.177445878s)
	I1209 01:56:52.068871  259666 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.478812094s)
	I1209 01:56:52.068888  259666 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-712341"
	I1209 01:56:52.068961  259666 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.908175899s)
	I1209 01:56:52.069086  259666 api_server.go:72] duration metric: took 12.224663964s to wait for apiserver process to appear ...
	I1209 01:56:52.069104  259666 api_server.go:88] waiting for apiserver healthz status ...
	I1209 01:56:52.069128  259666 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I1209 01:56:52.070627  259666 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1209 01:56:52.070629  259666 out.go:179] * Verifying csi-hostpath-driver addon...
	I1209 01:56:52.072619  259666 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1209 01:56:52.073428  259666 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1209 01:56:52.073533  259666 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1209 01:56:52.073555  259666 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1209 01:56:52.077068  259666 api_server.go:279] https://192.168.39.107:8443/healthz returned 200:
	ok
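	(The healthz probe above queries the API server endpoint directly. A rough manual equivalent, assuming a kubeconfig for the addons-712341 cluster; shown for illustration only:
	  # ask the API server for its aggregate health; prints "ok" when healthy
	  kubectl --context addons-712341 get --raw='/healthz'
	  # or per-check detail
	  kubectl --context addons-712341 get --raw='/readyz?verbose'
	)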
	I1209 01:56:52.085787  259666 api_server.go:141] control plane version: v1.34.2
	I1209 01:56:52.085847  259666 api_server.go:131] duration metric: took 16.729057ms to wait for apiserver health ...
	I1209 01:56:52.085863  259666 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 01:56:52.099490  259666 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1209 01:56:52.099517  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:52.103755  259666 system_pods.go:59] 20 kube-system pods found
	I1209 01:56:52.103804  259666 system_pods.go:61] "amd-gpu-device-plugin-v9zls" [be0f5b68-1efc-4f03-b19d-adfa034a57b3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1209 01:56:52.103815  259666 system_pods.go:61] "coredns-66bc5c9577-shdck" [d0f44c72-0768-4808-a1c0-509d3e328c38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 01:56:52.103838  259666 system_pods.go:61] "coredns-66bc5c9577-v5f2r" [524b3b94-0cfa-457a-aa87-bbd516f29864] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 01:56:52.103848  259666 system_pods.go:61] "csi-hostpath-attacher-0" [056c3e94-e378-4434-95ae-158383485f4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1209 01:56:52.103856  259666 system_pods.go:61] "csi-hostpath-resizer-0" [3267d67d-4d7e-4816-841d-91e30d091abe] Pending
	I1209 01:56:52.103865  259666 system_pods.go:61] "csi-hostpathplugin-kdsd6" [8b4341b2-33bd-408a-8472-4546030ef449] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1209 01:56:52.103871  259666 system_pods.go:61] "etcd-addons-712341" [dc56fb0e-3d25-4ecf-b7ac-d8f252ba1e90] Running
	I1209 01:56:52.103896  259666 system_pods.go:61] "kube-apiserver-addons-712341" [c5304e82-26dd-44bb-81f4-3e1fa4178b40] Running
	I1209 01:56:52.103902  259666 system_pods.go:61] "kube-controller-manager-addons-712341" [fa245aed-3fab-4f15-bd4e-0bd87b0850a9] Running
	I1209 01:56:52.103911  259666 system_pods.go:61] "kube-ingress-dns-minikube" [756114fc-819b-48c7-9b13-f0fb6eb36384] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1209 01:56:52.103918  259666 system_pods.go:61] "kube-proxy-vk4qc" [8b43011e-4293-431e-838d-88f45ea2837d] Running
	I1209 01:56:52.103924  259666 system_pods.go:61] "kube-scheduler-addons-712341" [6c9c9db7-76fe-46f9-ab73-306c1f5cc488] Running
	I1209 01:56:52.103931  259666 system_pods.go:61] "metrics-server-85b7d694d7-kkqs4" [84337421-94b2-47bc-a027-73f7b42030a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 01:56:52.103940  259666 system_pods.go:61] "nvidia-device-plugin-daemonset-44sbc" [046c49b7-0e2c-4126-bc6a-ba9c44dcdfeb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1209 01:56:52.103952  259666 system_pods.go:61] "registry-6b586f9694-kbblm" [2debdb6b-823b-4310-974e-3cf03104d154] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1209 01:56:52.103962  259666 system_pods.go:61] "registry-creds-764b6fb674-4th89" [757a10af-9961-47d1-a4fa-5480787fe593] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1209 01:56:52.103970  259666 system_pods.go:61] "registry-proxy-w94f7" [66b090e3-ac51-4b13-a537-2f07c2a6961d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1209 01:56:52.103984  259666 system_pods.go:61] "snapshot-controller-7d9fbc56b8-6mgx6" [d34b4c88-e09a-4259-96d5-43b960cb1543] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 01:56:52.103993  259666 system_pods.go:61] "snapshot-controller-7d9fbc56b8-78tv4" [cfe91410-68d2-43fb-8b8f-a73756bfdf68] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 01:56:52.104001  259666 system_pods.go:61] "storage-provisioner" [7f5f0da7-b773-470f-999a-a04b68b1cfbc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 01:56:52.104013  259666 system_pods.go:74] duration metric: took 18.141772ms to wait for pod list to return data ...
	I1209 01:56:52.104028  259666 default_sa.go:34] waiting for default service account to be created ...
	I1209 01:56:52.135500  259666 default_sa.go:45] found service account: "default"
	I1209 01:56:52.135537  259666 default_sa.go:55] duration metric: took 31.496154ms for default service account to be created ...
	I1209 01:56:52.135552  259666 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 01:56:52.145010  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:52.196342  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:52.201428  259666 system_pods.go:86] 20 kube-system pods found
	I1209 01:56:52.201472  259666 system_pods.go:89] "amd-gpu-device-plugin-v9zls" [be0f5b68-1efc-4f03-b19d-adfa034a57b3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1209 01:56:52.201483  259666 system_pods.go:89] "coredns-66bc5c9577-shdck" [d0f44c72-0768-4808-a1c0-509d3e328c38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 01:56:52.201495  259666 system_pods.go:89] "coredns-66bc5c9577-v5f2r" [524b3b94-0cfa-457a-aa87-bbd516f29864] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 01:56:52.201504  259666 system_pods.go:89] "csi-hostpath-attacher-0" [056c3e94-e378-4434-95ae-158383485f4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1209 01:56:52.201514  259666 system_pods.go:89] "csi-hostpath-resizer-0" [3267d67d-4d7e-4816-841d-91e30d091abe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1209 01:56:52.201522  259666 system_pods.go:89] "csi-hostpathplugin-kdsd6" [8b4341b2-33bd-408a-8472-4546030ef449] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1209 01:56:52.201527  259666 system_pods.go:89] "etcd-addons-712341" [dc56fb0e-3d25-4ecf-b7ac-d8f252ba1e90] Running
	I1209 01:56:52.201534  259666 system_pods.go:89] "kube-apiserver-addons-712341" [c5304e82-26dd-44bb-81f4-3e1fa4178b40] Running
	I1209 01:56:52.201542  259666 system_pods.go:89] "kube-controller-manager-addons-712341" [fa245aed-3fab-4f15-bd4e-0bd87b0850a9] Running
	I1209 01:56:52.201547  259666 system_pods.go:89] "kube-ingress-dns-minikube" [756114fc-819b-48c7-9b13-f0fb6eb36384] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1209 01:56:52.201550  259666 system_pods.go:89] "kube-proxy-vk4qc" [8b43011e-4293-431e-838d-88f45ea2837d] Running
	I1209 01:56:52.201554  259666 system_pods.go:89] "kube-scheduler-addons-712341" [6c9c9db7-76fe-46f9-ab73-306c1f5cc488] Running
	I1209 01:56:52.201562  259666 system_pods.go:89] "metrics-server-85b7d694d7-kkqs4" [84337421-94b2-47bc-a027-73f7b42030a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 01:56:52.201570  259666 system_pods.go:89] "nvidia-device-plugin-daemonset-44sbc" [046c49b7-0e2c-4126-bc6a-ba9c44dcdfeb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1209 01:56:52.201577  259666 system_pods.go:89] "registry-6b586f9694-kbblm" [2debdb6b-823b-4310-974e-3cf03104d154] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1209 01:56:52.201588  259666 system_pods.go:89] "registry-creds-764b6fb674-4th89" [757a10af-9961-47d1-a4fa-5480787fe593] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1209 01:56:52.201600  259666 system_pods.go:89] "registry-proxy-w94f7" [66b090e3-ac51-4b13-a537-2f07c2a6961d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1209 01:56:52.201609  259666 system_pods.go:89] "snapshot-controller-7d9fbc56b8-6mgx6" [d34b4c88-e09a-4259-96d5-43b960cb1543] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 01:56:52.201620  259666 system_pods.go:89] "snapshot-controller-7d9fbc56b8-78tv4" [cfe91410-68d2-43fb-8b8f-a73756bfdf68] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 01:56:52.201626  259666 system_pods.go:89] "storage-provisioner" [7f5f0da7-b773-470f-999a-a04b68b1cfbc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 01:56:52.201638  259666 system_pods.go:126] duration metric: took 66.07735ms to wait for k8s-apps to be running ...
	I1209 01:56:52.201649  259666 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 01:56:52.201711  259666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 01:56:52.244528  259666 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1209 01:56:52.244559  259666 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1209 01:56:52.442772  259666 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 01:56:52.442796  259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1209 01:56:52.528225  259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 01:56:52.584081  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:52.625928  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:52.627492  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:53.088457  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:53.124364  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:53.124382  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:53.462849  259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.51735524s)
	I1209 01:56:53.462988  259666 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.261245572s)
	I1209 01:56:53.463025  259666 system_svc.go:56] duration metric: took 1.261371822s WaitForService to wait for kubelet
	I1209 01:56:53.463035  259666 kubeadm.go:587] duration metric: took 13.618618651s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 01:56:53.463054  259666 node_conditions.go:102] verifying NodePressure condition ...
	I1209 01:56:53.469930  259666 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 01:56:53.469971  259666 node_conditions.go:123] node cpu capacity is 2
	I1209 01:56:53.469995  259666 node_conditions.go:105] duration metric: took 6.936425ms to run NodePressure ...
	I1209 01:56:53.470016  259666 start.go:242] waiting for startup goroutines ...
	I1209 01:56:53.579625  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:53.626562  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:53.627154  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:54.124663  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:54.164045  259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.635775951s)
	I1209 01:56:54.165086  259666 addons.go:495] Verifying addon gcp-auth=true in "addons-712341"
	I1209 01:56:54.167332  259666 out.go:179] * Verifying gcp-auth addon...
	I1209 01:56:54.169256  259666 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1209 01:56:54.228125  259666 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1209 01:56:54.228165  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:54.228312  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:54.228415  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:54.583764  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:54.626180  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:54.629673  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:54.674218  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:55.079318  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:55.122497  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:55.126937  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:55.176208  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:55.585843  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:55.624730  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:55.627667  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:55.675040  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:56.085690  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:56.182494  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:56.182785  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:56.184878  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:56.585510  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:56.625752  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:56.627653  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:56.674152  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:57.082923  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:57.125784  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:57.126000  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:57.174918  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:57.582283  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:57.624039  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:57.624899  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:57.673941  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:58.080773  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:58.125288  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:58.126514  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:58.174577  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:58.578448  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:58.622695  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:58.623048  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:58.673298  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:59.077568  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:59.124412  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:59.126387  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:59.173095  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:56:59.578317  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:56:59.623742  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:56:59.623804  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:56:59.673610  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:00.080249  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:00.124743  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:00.125239  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:00.174588  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:00.581630  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:00.624448  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:00.624709  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:00.675753  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:01.085094  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:01.183312  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:01.184367  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:01.185752  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:01.580392  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:01.625258  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:01.626660  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:01.674266  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:02.077570  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:02.127591  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:02.129194  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:02.175322  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:02.578455  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:02.624665  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:02.627647  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:02.674309  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:03.317591  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:03.319517  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:03.323970  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:03.324244  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:03.579001  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:03.622700  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:03.622776  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:03.676569  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:04.078200  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:04.126401  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:04.126501  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:04.173419  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:04.583559  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:04.625547  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:04.684957  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:04.685418  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:05.081509  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:05.125315  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:05.127322  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:05.173833  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:05.580854  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:05.623663  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:05.625082  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:05.676468  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:06.079745  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:06.126007  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:06.126426  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:06.178248  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:06.584121  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:06.634742  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:06.649068  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:06.681007  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:07.084361  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:07.128353  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:07.128412  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:07.173323  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:07.584092  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:07.633654  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:07.633806  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:07.678535  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:08.085337  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:08.126575  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:08.132547  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:08.175852  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:08.589876  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:08.626639  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:08.626787  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:08.672952  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:09.143091  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:09.176941  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:09.176959  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:09.185991  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:09.590625  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:09.637991  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:09.638062  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:09.677672  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:10.087051  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:10.130759  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:10.131618  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:10.257927  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:10.578874  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:10.626288  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:10.633335  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:10.673402  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:11.079652  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:11.123742  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:11.124607  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:11.173135  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:11.896064  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:11.899711  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:11.900088  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:11.900562  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:12.077782  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:12.123717  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:12.125064  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:12.175222  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:12.581620  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:12.624466  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:12.624538  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:12.673023  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:13.084428  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:13.129773  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:13.129956  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:13.177768  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:13.580556  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:13.623621  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:13.633417  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:13.675291  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:14.173130  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:14.173229  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:14.173248  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:14.176502  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:14.579046  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:14.625230  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:14.626321  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:14.674887  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:15.080600  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:15.124969  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:15.125480  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:15.174341  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:15.584569  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:15.624229  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:15.624229  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:15.675367  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:16.079166  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:16.125215  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:16.126459  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:16.173694  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:16.581782  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:16.633992  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:16.634090  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:16.678706  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:17.077563  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:17.124238  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:17.124531  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:17.175249  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:17.580164  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:17.622263  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:17.627949  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:17.674354  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:18.078350  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:18.126218  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:18.126269  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:18.173354  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:18.582504  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:18.622382  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:18.623178  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:18.673881  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:19.079452  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:19.123375  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:19.123564  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:19.172784  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:19.578676  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:19.626702  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:19.626798  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:19.674988  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:20.080518  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:20.129199  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:20.131023  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:20.173853  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:20.581148  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:20.623180  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:20.625878  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:20.674561  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:21.078452  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:21.123608  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:21.123904  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:21.174227  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:21.579635  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:21.625195  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:21.627504  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:21.678491  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:22.077901  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:22.121957  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:22.124367  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:22.182359  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:22.583140  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:22.622887  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:22.625802  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:22.673757  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:23.078800  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:23.122393  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:23.123611  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:23.172676  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:23.577407  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:23.625637  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:23.626278  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:23.673247  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:24.078709  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:24.127634  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:24.127632  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:24.174434  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:24.629344  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:24.630105  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:24.630371  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:24.676222  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:25.082745  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:25.126230  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:25.126247  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:25.183127  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:25.580564  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:25.625047  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 01:57:25.626145  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:25.678386  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:26.079566  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:26.129986  259666 kapi.go:107] duration metric: took 36.011243554s to wait for kubernetes.io/minikube-addons=registry ...
	I1209 01:57:26.134325  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:26.180286  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:26.578353  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:26.623355  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:26.678898  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:27.077651  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:27.123168  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:27.174032  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:27.578139  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:27.623299  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:27.673998  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:28.082115  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:28.128343  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:28.176091  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:28.583645  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:28.623219  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:28.683857  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:29.083973  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:29.126706  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:29.176159  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:29.582700  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:29.626043  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:29.674919  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:30.082193  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:30.126309  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:30.176544  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:30.579759  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:30.623749  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:30.674952  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:31.080543  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:31.123640  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:31.175213  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:31.578674  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:31.624425  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:31.673956  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:32.138550  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:32.140330  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:32.235545  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:32.580793  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:32.623258  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:32.681985  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:33.081798  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:33.128943  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:33.175314  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:33.579214  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:33.622333  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:33.675578  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:34.080089  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:34.122776  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:34.173142  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:34.581034  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:34.622760  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:34.673392  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:35.086300  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:35.181801  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:35.181992  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:35.580526  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:35.622457  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:35.676610  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:36.077417  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:36.125160  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:36.174409  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:36.579989  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:36.623532  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:36.675569  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:37.084603  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:37.125366  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:37.174307  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:37.578095  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:37.628535  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:37.676099  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:38.079122  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:38.123573  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:38.172649  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:38.580226  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:38.625574  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:38.674436  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:39.085026  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:39.128316  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:39.173217  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:39.577991  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:39.622752  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:39.673223  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:40.094578  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:40.207736  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:40.210228  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:40.582598  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:40.628218  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:40.676704  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:41.083672  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:41.127406  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:41.175103  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:41.581648  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:41.624383  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:41.679916  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:42.079057  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:42.124619  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:42.176517  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:42.579063  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:42.626238  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:42.679172  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:43.078200  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:43.122633  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:43.176914  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:43.579646  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:43.628515  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:43.729189  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:44.081665  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:44.124223  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:44.175118  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:44.580714  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:44.623029  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:44.673941  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:45.084222  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:45.125938  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:45.173752  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:45.577464  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:45.623683  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:45.676164  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:46.078254  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:46.122658  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:46.172598  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:46.577648  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:46.625502  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:46.676181  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:47.082069  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:47.183754  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:47.184962  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:47.580085  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 01:57:47.626889  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:47.681007  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:48.079598  259666 kapi.go:107] duration metric: took 56.006171821s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1209 01:57:48.123338  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:48.173677  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:48.623056  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:48.673218  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:49.123987  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:49.175564  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:49.622434  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:49.673473  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:50.123862  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:50.173136  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:50.623865  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:50.673933  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:51.124158  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:51.173322  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:51.624657  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:51.673452  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:52.123217  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:52.174668  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:52.622579  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:52.673319  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:53.123321  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:53.173660  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:53.622438  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:53.672675  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:54.125995  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:54.173526  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:54.625919  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:54.674525  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:55.124815  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:55.173817  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:55.624454  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:55.674130  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:56.127959  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:56.173127  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:56.625140  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:56.675445  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:57.125125  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:57.175856  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:57.623804  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:57.672563  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:58.123028  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:58.173345  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:58.625610  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:58.673361  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:59.129781  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:59.175313  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:57:59.627394  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:57:59.676246  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:58:00.124413  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:58:00.176034  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:58:00.623496  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:58:00.672914  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:58:01.124888  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:58:01.174951  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:58:01.623445  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:58:01.673413  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:58:02.123101  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:58:02.173216  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:58:02.623570  259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 01:58:02.673217  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:58:03.123634  259666 kapi.go:107] duration metric: took 1m13.005030885s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1209 01:58:03.172724  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:58:03.735701  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:58:04.177065  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:58:04.673910  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:58:05.174468  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:58:05.673784  259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 01:58:06.174417  259666 kapi.go:107] duration metric: took 1m12.00515853s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1209 01:58:06.176402  259666 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-712341 cluster.
	I1209 01:58:06.177757  259666 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1209 01:58:06.179167  259666 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1209 01:58:06.181268  259666 out.go:179] * Enabled addons: amd-gpu-device-plugin, storage-provisioner, default-storageclass, nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner-rancher, registry-creds, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1209 01:58:06.182601  259666 addons.go:530] duration metric: took 1m26.338266919s for enable addons: enabled=[amd-gpu-device-plugin storage-provisioner default-storageclass nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner-rancher registry-creds inspektor-gadget metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1209 01:58:06.182655  259666 start.go:247] waiting for cluster config update ...
	I1209 01:58:06.182683  259666 start.go:256] writing updated cluster config ...
	I1209 01:58:06.182994  259666 ssh_runner.go:195] Run: rm -f paused
	I1209 01:58:06.191625  259666 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 01:58:06.274995  259666 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-shdck" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:58:06.288662  259666 pod_ready.go:94] pod "coredns-66bc5c9577-shdck" is "Ready"
	I1209 01:58:06.288705  259666 pod_ready.go:86] duration metric: took 13.669679ms for pod "coredns-66bc5c9577-shdck" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:58:06.292456  259666 pod_ready.go:83] waiting for pod "etcd-addons-712341" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:58:06.298582  259666 pod_ready.go:94] pod "etcd-addons-712341" is "Ready"
	I1209 01:58:06.298627  259666 pod_ready.go:86] duration metric: took 6.137664ms for pod "etcd-addons-712341" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:58:06.301475  259666 pod_ready.go:83] waiting for pod "kube-apiserver-addons-712341" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:58:06.307937  259666 pod_ready.go:94] pod "kube-apiserver-addons-712341" is "Ready"
	I1209 01:58:06.307975  259666 pod_ready.go:86] duration metric: took 6.464095ms for pod "kube-apiserver-addons-712341" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:58:06.310976  259666 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-712341" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:58:06.596027  259666 pod_ready.go:94] pod "kube-controller-manager-addons-712341" is "Ready"
	I1209 01:58:06.596067  259666 pod_ready.go:86] duration metric: took 285.04526ms for pod "kube-controller-manager-addons-712341" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:58:06.797515  259666 pod_ready.go:83] waiting for pod "kube-proxy-vk4qc" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:58:07.261429  259666 pod_ready.go:94] pod "kube-proxy-vk4qc" is "Ready"
	I1209 01:58:07.261457  259666 pod_ready.go:86] duration metric: took 463.913599ms for pod "kube-proxy-vk4qc" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:58:07.396631  259666 pod_ready.go:83] waiting for pod "kube-scheduler-addons-712341" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:58:07.796322  259666 pod_ready.go:94] pod "kube-scheduler-addons-712341" is "Ready"
	I1209 01:58:07.796353  259666 pod_ready.go:86] duration metric: took 399.694019ms for pod "kube-scheduler-addons-712341" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 01:58:07.796368  259666 pod_ready.go:40] duration metric: took 1.604694946s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 01:58:07.843921  259666 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1209 01:58:07.845848  259666 out.go:179] * Done! kubectl is now configured to use "addons-712341" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.823487832Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a045cc05-aac7-4565-9364-4ade4ed238f9 name=/runtime.v1.RuntimeService/Version
	Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.825144104Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f84c201c-14bd-4785-a195-c7c58f1d305b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.826668321Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765245669826571470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545751,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f84c201c-14bd-4785-a195-c7c58f1d305b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.828396361Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9acedacc-b0bb-49d1-8c57-6508017c2953 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.828472508Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9acedacc-b0bb-49d1-8c57-6508017c2953 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.828954204Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec15d060ae394906d39ef35df621d3eaa17eff94affb7d575b4004b993bb8387,PodSandboxId:62f0b8ae019de502b93b08128bc55fcf2b19162ed6caf21dd6c320accbc9cbcf,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765245526447302776,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 56885611-8b41-4e56-b6f9-8cc75bfdbfd9,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f587c28a8ce3c46b82eee4271e328e96588b54af9dcbc51395bc49b1c3cf5cb5,PodSandboxId:3c0b1ee3ed1034dfec65a6b682d4dc347ff9aea35f69421105196a0cda41475b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765245491470685947,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de8fd268-6e5a-4d89-89ef-8d352023a017,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca93564ea7dd662f18e42eeadee30ffbc06cd7c45ccdbea985fb8f36a4429a3d,PodSandboxId:f69cb0b70c06ea6d570b69b535d56ac56b7960c5302d4bddeefb01d520709a8f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765245481871542524,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-swb6n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1b839a85-e21a-4700-bdd5-73a4eb455656,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:4318789e13291a608e2e9a415455ab5f4461ae68099375bf94ff7c7e5d2d5375,PodSandboxId:fe31d2996503040215e0c01f0a810cbd2fe242d024000ad576cc84789df1ae40,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765245448039684284,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-d4sv2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 358a6b20-7ecd-43a5-bcd7-0ed30014543e,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3386cbf3ac7e87460fb2a04e7019500054049023b78cc5c926010c9b389697b,PodSandboxId:fc666f12e07f041eca7c227af7f72d42386b4dc46a40c2a77fe7fc1310b500eb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765245446327400051,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7bf82,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 94315895-0cf8-4263-8d0c-d3aa9b6dbe2b,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e2be0f8d767c790242e8b7b87b5c2c63447f49568e123be16a57d2df1139f42,PodSandboxId:73ad4dba12805d0d45c3ab7da1a7c244f5e83888673efc139454028d68f86c10,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765245435332705098,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756114fc-819b-48c7-9b13-f0fb6eb36384,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:260555cd1575816836ddb050ffe5036a4263d87790b0362a7a833bdf6d25fdb5,PodSandboxId:10e24757d42b2a67cfec36df263a739da7031be1f40a8e8efc64cd3aa7a56a19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&Ima
geSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765245414418711010,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5f0da7-b773-470f-999a-a04b68b1cfbc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30dcac27e864e8939ded9c048c72e6aaf02e7fb23ca367d6998c8a3451001061,PodSandboxId:e1e470ec0036f562e2d3ba4058327fe7dba3b9556bc0ad8737f9a479e574df4a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Ima
ge:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765245410123959153,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-v9zls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be0f5b68-1efc-4f03-b19d-adfa034a57b3,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38631b1bcc4c2e7248750e6d1133052729f2e37827e330e72bf02c4a81d8f68b,PodSandboxId:0fcc9f2967f1dfe6617a59473ed7c4fc75c6c8bf8900d20d29a281cc7287610e,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765245401911110108,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-shdck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0f44c72-0768-4808-a1c0-509d3e328c38,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6720b9b4382c48c64fdec86c2fd0596e617c82196ba5f4b5489e136a804fc6fb,PodSandboxId:c02b4d25a160be1076b454bcffb215cb8e5dcddd53d5702208a4f51964224f3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765245400953649001,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vk4qc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b43011e-4293-431e-838d-88f45ea2837d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a3b82a29bc88ba34fdd0a63cfa749adabfbbce5ee66a7027143a11789da78ba,PodSandboxId:5cd8e6de89e438bec91b41e04acc882cf651e09c170a9d307c836acb3a5106fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765245387963030793,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce3ac49c9daa5dc52e59239b1562bf5a,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":102
57,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ecfa308eea4cdeebe3c9474876bba25ef96e20f8e8cf4305f0bf1a32112ee5b,PodSandboxId:d34cb6f659c30db478556b006b08926efdd4ac502cb7f85e396aa485f9802e5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765245387936682755,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 983b6a8f4c7cd5049430c8725659e085,},Annotations:map[string]string{io.kubernetes.container.hash:
e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:685da6ee8ce553eb479d57c5570e5ce09b45f9f091f643861572f0b00fa9f7c4,PodSandboxId:e610390c074e940470ed9c320800e40d3cfcdc6b51497edd31912cb1819914c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765245387876361124,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 7f6ea96060ca8daf2f4fa541fba3771c,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc82dcd02980dfe5cfcad067f06da24ccad8715782004643b6379245ab335497,PodSandboxId:cf79b9732b052ad72057f9fe0e7124efccda99cd600e66a4d7107351a6144328,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765245387790404061,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name
: etcd-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 617af1bb7b72d83eac8d928f752abda3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9acedacc-b0bb-49d1-8c57-6508017c2953 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.861780209Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=acdf9e16-99a6-4a13-bcc3-a9ca80fa3ccf name=/runtime.v1.RuntimeService/Version
	Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.861883365Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=acdf9e16-99a6-4a13-bcc3-a9ca80fa3ccf name=/runtime.v1.RuntimeService/Version
	Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.863343110Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79162913-9ec2-4d23-8f94-151473e80395 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.864681586Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765245669864654053,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545751,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79162913-9ec2-4d23-8f94-151473e80395 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.865716178Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16f5baa3-d520-40fe-a770-554e981b3112 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.865773811Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16f5baa3-d520-40fe-a770-554e981b3112 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.866145718Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec15d060ae394906d39ef35df621d3eaa17eff94affb7d575b4004b993bb8387,PodSandboxId:62f0b8ae019de502b93b08128bc55fcf2b19162ed6caf21dd6c320accbc9cbcf,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765245526447302776,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 56885611-8b41-4e56-b6f9-8cc75bfdbfd9,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f587c28a8ce3c46b82eee4271e328e96588b54af9dcbc51395bc49b1c3cf5cb5,PodSandboxId:3c0b1ee3ed1034dfec65a6b682d4dc347ff9aea35f69421105196a0cda41475b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765245491470685947,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de8fd268-6e5a-4d89-89ef-8d352023a017,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca93564ea7dd662f18e42eeadee30ffbc06cd7c45ccdbea985fb8f36a4429a3d,PodSandboxId:f69cb0b70c06ea6d570b69b535d56ac56b7960c5302d4bddeefb01d520709a8f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765245481871542524,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-swb6n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1b839a85-e21a-4700-bdd5-73a4eb455656,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:4318789e13291a608e2e9a415455ab5f4461ae68099375bf94ff7c7e5d2d5375,PodSandboxId:fe31d2996503040215e0c01f0a810cbd2fe242d024000ad576cc84789df1ae40,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765245448039684284,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-d4sv2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 358a6b20-7ecd-43a5-bcd7-0ed30014543e,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3386cbf3ac7e87460fb2a04e7019500054049023b78cc5c926010c9b389697b,PodSandboxId:fc666f12e07f041eca7c227af7f72d42386b4dc46a40c2a77fe7fc1310b500eb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765245446327400051,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7bf82,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 94315895-0cf8-4263-8d0c-d3aa9b6dbe2b,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e2be0f8d767c790242e8b7b87b5c2c63447f49568e123be16a57d2df1139f42,PodSandboxId:73ad4dba12805d0d45c3ab7da1a7c244f5e83888673efc139454028d68f86c10,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765245435332705098,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756114fc-819b-48c7-9b13-f0fb6eb36384,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:260555cd1575816836ddb050ffe5036a4263d87790b0362a7a833bdf6d25fdb5,PodSandboxId:10e24757d42b2a67cfec36df263a739da7031be1f40a8e8efc64cd3aa7a56a19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&Ima
geSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765245414418711010,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5f0da7-b773-470f-999a-a04b68b1cfbc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30dcac27e864e8939ded9c048c72e6aaf02e7fb23ca367d6998c8a3451001061,PodSandboxId:e1e470ec0036f562e2d3ba4058327fe7dba3b9556bc0ad8737f9a479e574df4a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Ima
ge:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765245410123959153,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-v9zls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be0f5b68-1efc-4f03-b19d-adfa034a57b3,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38631b1bcc4c2e7248750e6d1133052729f2e37827e330e72bf02c4a81d8f68b,PodSandboxId:0fcc9f2967f1dfe6617a59473ed7c4fc75c6c8bf8900d20d29a281cc7287610e,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765245401911110108,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-shdck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0f44c72-0768-4808-a1c0-509d3e328c38,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6720b9b4382c48c64fdec86c2fd0596e617c82196ba5f4b5489e136a804fc6fb,PodSandboxId:c02b4d25a160be1076b454bcffb215cb8e5dcddd53d5702208a4f51964224f3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765245400953649001,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vk4qc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b43011e-4293-431e-838d-88f45ea2837d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a3b82a29bc88ba34fdd0a63cfa749adabfbbce5ee66a7027143a11789da78ba,PodSandboxId:5cd8e6de89e438bec91b41e04acc882cf651e09c170a9d307c836acb3a5106fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765245387963030793,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce3ac49c9daa5dc52e59239b1562bf5a,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":102
57,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ecfa308eea4cdeebe3c9474876bba25ef96e20f8e8cf4305f0bf1a32112ee5b,PodSandboxId:d34cb6f659c30db478556b006b08926efdd4ac502cb7f85e396aa485f9802e5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765245387936682755,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 983b6a8f4c7cd5049430c8725659e085,},Annotations:map[string]string{io.kubernetes.container.hash:
e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:685da6ee8ce553eb479d57c5570e5ce09b45f9f091f643861572f0b00fa9f7c4,PodSandboxId:e610390c074e940470ed9c320800e40d3cfcdc6b51497edd31912cb1819914c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765245387876361124,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 7f6ea96060ca8daf2f4fa541fba3771c,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc82dcd02980dfe5cfcad067f06da24ccad8715782004643b6379245ab335497,PodSandboxId:cf79b9732b052ad72057f9fe0e7124efccda99cd600e66a4d7107351a6144328,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765245387790404061,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name
: etcd-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 617af1bb7b72d83eac8d928f752abda3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=16f5baa3-d520-40fe-a770-554e981b3112 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.867720201Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=504bfdf9-7d37-4107-b18c-33f075f50f3f name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.868678055Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:893bfb8bf65bf7d1bcc7d23c10585d6867da8439ed5550dc87e0c249a63b8b91,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-lmwdx,Uid:c662e72a-cc05-4c42-9e4a-0643c57478d7,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765245668909198277,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-lmwdx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c662e72a-cc05-4c42-9e4a-0643c57478d7,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-09T02:01:08.581156960Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:62f0b8ae019de502b93b08128bc55fcf2b19162ed6caf21dd6c320accbc9cbcf,Metadata:&PodSandboxMetadata{Name:nginx,Uid:56885611-8b41-4e56-b6f9-8cc75bfdbfd9,Namespace:default,Attempt:0,},St
ate:SANDBOX_READY,CreatedAt:1765245520962164705,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 56885611-8b41-4e56-b6f9-8cc75bfdbfd9,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-09T01:58:40.569830689Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3c0b1ee3ed1034dfec65a6b682d4dc347ff9aea35f69421105196a0cda41475b,Metadata:&PodSandboxMetadata{Name:busybox,Uid:de8fd268-6e5a-4d89-89ef-8d352023a017,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765245488805507086,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de8fd268-6e5a-4d89-89ef-8d352023a017,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-09T01:58:08.481491691Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f69cb0b70c06ea6d570b6
9b535d56ac56b7960c5302d4bddeefb01d520709a8f,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-85d4c799dd-swb6n,Uid:1b839a85-e21a-4700-bdd5-73a4eb455656,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765245473962477828,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-swb6n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1b839a85-e21a-4700-bdd5-73a4eb455656,pod-template-hash: 85d4c799dd,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-09T01:56:49.727401401Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:10e24757d42b2a67cfec36df263a739da7031be1f40a8e8efc64cd3aa7a56a19,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7f5f0da7-b773-470f-999a-a04b68b1cfbc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,
CreatedAt:1765245409861266483,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5f0da7-b773-470f-999a-a04b68b1cfbc,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"D
irectory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-09T01:56:47.228337375Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:73ad4dba12805d0d45c3ab7da1a7c244f5e83888673efc139454028d68f86c10,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:756114fc-819b-48c7-9b13-f0fb6eb36384,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765245409842477875,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756114fc-819b-48c7-9b13-f0fb6eb36384,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"container
s\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"hostPort\":53,\"protocol\":\"UDP\"}],\"volumeMounts\":[{\"mountPath\":\"/config\",\"name\":\"minikube-ingress-dns-config-volume\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\",\"volumes\":[{\"configMap\":{\"name\":\"minikube-ingress-dns\"},\"name\":\"minikube-ingress-dns-config-volume\"}]}}\n,kubernetes.io/config.seen: 2025-12-09T01:56:46.978669710Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e1e470ec0036f562e2d3ba4058327fe7dba3b9556bc0ad8737f9a479e574df4a,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-v9zls,Uid:be0f5b68-1efc-4f03-b19d-adfa034a57b3,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1765245403890357077,Labels:map[string]string{controller-revision-hash: 7f87d6fd8d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-v9zls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be0f5b68-1efc-4f03-b19d-adfa034a57b3,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-09T01:56:43.553163358Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0fcc9f2967f1dfe6617a59473ed7c4fc75c6c8bf8900d20d29a281cc7287610e,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-shdck,Uid:d0f44c72-0768-4808-a1c0-509d3e328c38,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765245400699093153,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-shdck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0f44c72-0768-4808-a1c0-509d3e328c38,k8s-app: kube-dns,po
d-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-09T01:56:40.355793570Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c02b4d25a160be1076b454bcffb215cb8e5dcddd53d5702208a4f51964224f3c,Metadata:&PodSandboxMetadata{Name:kube-proxy-vk4qc,Uid:8b43011e-4293-431e-838d-88f45ea2837d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765245400567993975,Labels:map[string]string{controller-revision-hash: 66d5f8d6f6,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-vk4qc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b43011e-4293-431e-838d-88f45ea2837d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-09T01:56:40.240345782Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5cd8e6de89e438bec91b41e04acc882cf651e09c170a9d307c836acb3a5106fb,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-712341,Uid:ce3ac49c9daa5
dc52e59239b1562bf5a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765245387623289941,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce3ac49c9daa5dc52e59239b1562bf5a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ce3ac49c9daa5dc52e59239b1562bf5a,kubernetes.io/config.seen: 2025-12-09T01:56:26.713360509Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d34cb6f659c30db478556b006b08926efdd4ac502cb7f85e396aa485f9802e5e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-712341,Uid:983b6a8f4c7cd5049430c8725659e085,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765245387593526375,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-712341,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 983b6a8f4c7cd5049430c8725659e085,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 983b6a8f4c7cd5049430c8725659e085,kubernetes.io/config.seen: 2025-12-09T01:56:26.713361403Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e610390c074e940470ed9c320800e40d3cfcdc6b51497edd31912cb1819914c9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-712341,Uid:7f6ea96060ca8daf2f4fa541fba3771c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765245387479539170,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f6ea96060ca8daf2f4fa541fba3771c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.107:8443,kubernetes.io/config.hash: 7f6ea96060ca8daf2f4fa541fba3771c,kubernetes.io/config.seen: 2025-12-09T01:56:26.7
13359498Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cf79b9732b052ad72057f9fe0e7124efccda99cd600e66a4d7107351a6144328,Metadata:&PodSandboxMetadata{Name:etcd-addons-712341,Uid:617af1bb7b72d83eac8d928f752abda3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765245387477163891,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 617af1bb7b72d83eac8d928f752abda3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.107:2379,kubernetes.io/config.hash: 617af1bb7b72d83eac8d928f752abda3,kubernetes.io/config.seen: 2025-12-09T01:56:26.713356729Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=504bfdf9-7d37-4107-b18c-33f075f50f3f name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.870769072Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d35db87-611e-42ab-bbba-a23b50dc9b73 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.870827635Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d35db87-611e-42ab-bbba-a23b50dc9b73 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.871146584Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec15d060ae394906d39ef35df621d3eaa17eff94affb7d575b4004b993bb8387,PodSandboxId:62f0b8ae019de502b93b08128bc55fcf2b19162ed6caf21dd6c320accbc9cbcf,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765245526447302776,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 56885611-8b41-4e56-b6f9-8cc75bfdbfd9,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f587c28a8ce3c46b82eee4271e328e96588b54af9dcbc51395bc49b1c3cf5cb5,PodSandboxId:3c0b1ee3ed1034dfec65a6b682d4dc347ff9aea35f69421105196a0cda41475b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765245491470685947,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de8fd268-6e5a-4d89-89ef-8d352023a017,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca93564ea7dd662f18e42eeadee30ffbc06cd7c45ccdbea985fb8f36a4429a3d,PodSandboxId:f69cb0b70c06ea6d570b69b535d56ac56b7960c5302d4bddeefb01d520709a8f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765245481871542524,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-swb6n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1b839a85-e21a-4700-bdd5-73a4eb455656,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5e2be0f8d767c790242e8b7b87b5c2c63447f49568e123be16a57d2df1139f42,PodSandboxId:73ad4dba12805d0d45c3ab7da1a7c244f5e83888673efc139454028d68f86c10,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a
b53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765245435332705098,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756114fc-819b-48c7-9b13-f0fb6eb36384,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:260555cd1575816836ddb050ffe5036a4263d87790b0362a7a833bdf6d25fdb5,PodSandboxId:10e24757d42b2a67cfec36df263a739da7031be1f40a8e8efc64cd3aa7a56a19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a30
2a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765245414418711010,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5f0da7-b773-470f-999a-a04b68b1cfbc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30dcac27e864e8939ded9c048c72e6aaf02e7fb23ca367d6998c8a3451001061,PodSandboxId:e1e470ec0036f562e2d3ba4058327fe7dba3b9556bc0ad8737f9a479e574df4a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166
c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765245410123959153,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-v9zls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be0f5b68-1efc-4f03-b19d-adfa034a57b3,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38631b1bcc4c2e7248750e6d1133052729f2e37827e330e72bf02c4a81d8f68b,PodSandboxId:0fcc9f2967f1dfe6617a59473ed7c4fc75c6c8bf8900d20d29a281cc7287610e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e532450
23b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765245401911110108,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-shdck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0f44c72-0768-4808-a1c0-509d3e328c38,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6720b9b4382c48c64fdec86c2fd0596e617c82196ba5f4b5489e136a804fc6fb,PodSandboxId:c02b4d25a160be1076b454bcffb215cb8e5dcddd53d5702208a4f51964224f3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765245400953649001,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vk4qc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b43011e-4293-431e-838d-88f45ea2837d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:4a3b82a29bc88ba34fdd0a63cfa749adabfbbce5ee66a7027143a11789da78ba,PodSandboxId:5cd8e6de89e438bec91b41e04acc882cf651e09c170a9d307c836acb3a5106fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765245387963030793,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce3ac49c9daa5dc52e59239b1562bf5a,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ecfa308eea4cdeebe3c9474876bba25ef96e20f8e8cf4305f0bf1a32112ee5b,PodSandboxId:d34cb6f659c30db478556b006b08926efdd4ac502cb7f85e396aa485f9802e5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765245387936682755,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 983b6a8f4c7cd5049430c8725659e085,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"host
Port\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:685da6ee8ce553eb479d57c5570e5ce09b45f9f091f643861572f0b00fa9f7c4,PodSandboxId:e610390c074e940470ed9c320800e40d3cfcdc6b51497edd31912cb1819914c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765245387876361124,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f6ea96060ca8daf2f4fa541fba3771c,},Annotations:map[string]str
ing{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc82dcd02980dfe5cfcad067f06da24ccad8715782004643b6379245ab335497,PodSandboxId:cf79b9732b052ad72057f9fe0e7124efccda99cd600e66a4d7107351a6144328,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765245387790404061,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 617af1bb7b72d83eac8d928f752abda3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d35db87-611e-42ab-bbba-a23b50dc9b73 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.903479781Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1106c887-11b0-4761-8735-7d1657e1c114 name=/runtime.v1.RuntimeService/Version
	Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.903631863Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1106c887-11b0-4761-8735-7d1657e1c114 name=/runtime.v1.RuntimeService/Version
	Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.905032057Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=640edb5d-c4c9-494f-9517-da85091afac9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.906243464Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765245669906212981,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545751,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=640edb5d-c4c9-494f-9517-da85091afac9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.907280238Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd3430a8-8132-44d0-9c56-56b81df6303e name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.907400578Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd3430a8-8132-44d0-9c56-56b81df6303e name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.907788937Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec15d060ae394906d39ef35df621d3eaa17eff94affb7d575b4004b993bb8387,PodSandboxId:62f0b8ae019de502b93b08128bc55fcf2b19162ed6caf21dd6c320accbc9cbcf,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765245526447302776,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 56885611-8b41-4e56-b6f9-8cc75bfdbfd9,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f587c28a8ce3c46b82eee4271e328e96588b54af9dcbc51395bc49b1c3cf5cb5,PodSandboxId:3c0b1ee3ed1034dfec65a6b682d4dc347ff9aea35f69421105196a0cda41475b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765245491470685947,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de8fd268-6e5a-4d89-89ef-8d352023a017,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca93564ea7dd662f18e42eeadee30ffbc06cd7c45ccdbea985fb8f36a4429a3d,PodSandboxId:f69cb0b70c06ea6d570b69b535d56ac56b7960c5302d4bddeefb01d520709a8f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765245481871542524,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-swb6n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1b839a85-e21a-4700-bdd5-73a4eb455656,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:4318789e13291a608e2e9a415455ab5f4461ae68099375bf94ff7c7e5d2d5375,PodSandboxId:fe31d2996503040215e0c01f0a810cbd2fe242d024000ad576cc84789df1ae40,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765245448039684284,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-d4sv2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 358a6b20-7ecd-43a5-bcd7-0ed30014543e,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3386cbf3ac7e87460fb2a04e7019500054049023b78cc5c926010c9b389697b,PodSandboxId:fc666f12e07f041eca7c227af7f72d42386b4dc46a40c2a77fe7fc1310b500eb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765245446327400051,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7bf82,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 94315895-0cf8-4263-8d0c-d3aa9b6dbe2b,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e2be0f8d767c790242e8b7b87b5c2c63447f49568e123be16a57d2df1139f42,PodSandboxId:73ad4dba12805d0d45c3ab7da1a7c244f5e83888673efc139454028d68f86c10,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765245435332705098,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756114fc-819b-48c7-9b13-f0fb6eb36384,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:260555cd1575816836ddb050ffe5036a4263d87790b0362a7a833bdf6d25fdb5,PodSandboxId:10e24757d42b2a67cfec36df263a739da7031be1f40a8e8efc64cd3aa7a56a19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&Ima
geSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765245414418711010,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5f0da7-b773-470f-999a-a04b68b1cfbc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30dcac27e864e8939ded9c048c72e6aaf02e7fb23ca367d6998c8a3451001061,PodSandboxId:e1e470ec0036f562e2d3ba4058327fe7dba3b9556bc0ad8737f9a479e574df4a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Ima
ge:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765245410123959153,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-v9zls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be0f5b68-1efc-4f03-b19d-adfa034a57b3,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38631b1bcc4c2e7248750e6d1133052729f2e37827e330e72bf02c4a81d8f68b,PodSandboxId:0fcc9f2967f1dfe6617a59473ed7c4fc75c6c8bf8900d20d29a281cc7287610e,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765245401911110108,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-shdck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0f44c72-0768-4808-a1c0-509d3e328c38,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6720b9b4382c48c64fdec86c2fd0596e617c82196ba5f4b5489e136a804fc6fb,PodSandboxId:c02b4d25a160be1076b454bcffb215cb8e5dcddd53d5702208a4f51964224f3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765245400953649001,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vk4qc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b43011e-4293-431e-838d-88f45ea2837d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a3b82a29bc88ba34fdd0a63cfa749adabfbbce5ee66a7027143a11789da78ba,PodSandboxId:5cd8e6de89e438bec91b41e04acc882cf651e09c170a9d307c836acb3a5106fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765245387963030793,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce3ac49c9daa5dc52e59239b1562bf5a,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":102
57,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ecfa308eea4cdeebe3c9474876bba25ef96e20f8e8cf4305f0bf1a32112ee5b,PodSandboxId:d34cb6f659c30db478556b006b08926efdd4ac502cb7f85e396aa485f9802e5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765245387936682755,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 983b6a8f4c7cd5049430c8725659e085,},Annotations:map[string]string{io.kubernetes.container.hash:
e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:685da6ee8ce553eb479d57c5570e5ce09b45f9f091f643861572f0b00fa9f7c4,PodSandboxId:e610390c074e940470ed9c320800e40d3cfcdc6b51497edd31912cb1819914c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765245387876361124,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 7f6ea96060ca8daf2f4fa541fba3771c,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc82dcd02980dfe5cfcad067f06da24ccad8715782004643b6379245ab335497,PodSandboxId:cf79b9732b052ad72057f9fe0e7124efccda99cd600e66a4d7107351a6144328,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765245387790404061,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name
: etcd-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 617af1bb7b72d83eac8d928f752abda3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dd3430a8-8132-44d0-9c56-56b81df6303e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ec15d060ae394       public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9                           2 minutes ago       Running             nginx                     0                   62f0b8ae019de       nginx                                       default
	f587c28a8ce3c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   3c0b1ee3ed103       busybox                                     default
	ca93564ea7dd6       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad             3 minutes ago       Running             controller                0                   f69cb0b70c06e       ingress-nginx-controller-85d4c799dd-swb6n   ingress-nginx
	4318789e13291       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   3 minutes ago       Exited              patch                     0                   fe31d29965030       ingress-nginx-admission-patch-d4sv2         ingress-nginx
	f3386cbf3ac7e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   3 minutes ago       Exited              create                    0                   fc666f12e07f0       ingress-nginx-admission-create-7bf82        ingress-nginx
	5e2be0f8d767c       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               3 minutes ago       Running             minikube-ingress-dns      0                   73ad4dba12805       kube-ingress-dns-minikube                   kube-system
	260555cd15758       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   10e24757d42b2       storage-provisioner                         kube-system
	30dcac27e864e       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   e1e470ec0036f       amd-gpu-device-plugin-v9zls                 kube-system
	38631b1bcc4c2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   0fcc9f2967f1d       coredns-66bc5c9577-shdck                    kube-system
	6720b9b4382c4       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                             4 minutes ago       Running             kube-proxy                0                   c02b4d25a160b       kube-proxy-vk4qc                            kube-system
	4a3b82a29bc88       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                             4 minutes ago       Running             kube-controller-manager   0                   5cd8e6de89e43       kube-controller-manager-addons-712341       kube-system
	7ecfa308eea4c       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                             4 minutes ago       Running             kube-scheduler            0                   d34cb6f659c30       kube-scheduler-addons-712341                kube-system
	685da6ee8ce55       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                             4 minutes ago       Running             kube-apiserver            0                   e610390c074e9       kube-apiserver-addons-712341                kube-system
	cc82dcd02980d       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                             4 minutes ago       Running             etcd                      0                   cf79b9732b052       etcd-addons-712341                          kube-system
	
	
	==> coredns [38631b1bcc4c2e7248750e6d1133052729f2e37827e330e72bf02c4a81d8f68b] <==
	[INFO] 10.244.0.9:42296 - 55609 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000094114s
	[INFO] 10.244.0.9:42296 - 21902 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000081263s
	[INFO] 10.244.0.9:42296 - 64647 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000069065s
	[INFO] 10.244.0.9:42296 - 17032 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000096282s
	[INFO] 10.244.0.9:42296 - 59813 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000065799s
	[INFO] 10.244.0.9:42296 - 11550 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000116661s
	[INFO] 10.244.0.9:42296 - 2942 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000241862s
	[INFO] 10.244.0.9:36256 - 62408 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000487727s
	[INFO] 10.244.0.9:36256 - 62760 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000114583s
	[INFO] 10.244.0.9:55003 - 12479 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00013171s
	[INFO] 10.244.0.9:55003 - 12780 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000993857s
	[INFO] 10.244.0.9:44776 - 55722 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000161759s
	[INFO] 10.244.0.9:44776 - 55464 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000180221s
	[INFO] 10.244.0.9:58912 - 22746 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00043327s
	[INFO] 10.244.0.9:58912 - 22487 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000652323s
	[INFO] 10.244.0.23:46020 - 3548 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000286907s
	[INFO] 10.244.0.23:45275 - 59525 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00085293s
	[INFO] 10.244.0.23:53151 - 9476 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000170215s
	[INFO] 10.244.0.23:37553 - 36707 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127048s
	[INFO] 10.244.0.23:60979 - 33304 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000129896s
	[INFO] 10.244.0.23:41952 - 29892 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000095083s
	[INFO] 10.244.0.23:38807 - 23059 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.002495032s
	[INFO] 10.244.0.23:55574 - 25677 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.005315665s
	[INFO] 10.244.0.27:51199 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000715236s
	[INFO] 10.244.0.27:59123 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000210023s
	
	
	==> describe nodes <==
	Name:               addons-712341
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-712341
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=addons-712341
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T01_56_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-712341
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 01:56:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-712341
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 02:01:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 01:59:06 +0000   Tue, 09 Dec 2025 01:56:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 01:59:06 +0000   Tue, 09 Dec 2025 01:56:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 01:59:06 +0000   Tue, 09 Dec 2025 01:56:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 01:59:06 +0000   Tue, 09 Dec 2025 01:56:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.107
	  Hostname:    addons-712341
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 870ec28c5b8846bcb90887091429a736
	  System UUID:                870ec28c-5b88-46bc-b908-87091429a736
	  Boot ID:                    d4d81322-16b9-4840-86b9-308fe92e01c6
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  default                     hello-world-app-5d498dc89-lmwdx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-swb6n    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m21s
	  kube-system                 amd-gpu-device-plugin-v9zls                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 coredns-66bc5c9577-shdck                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m30s
	  kube-system                 etcd-addons-712341                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m37s
	  kube-system                 kube-apiserver-addons-712341                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-controller-manager-addons-712341        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-proxy-vk4qc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-scheduler-addons-712341                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m28s                  kube-proxy       
	  Normal  Starting                 4m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m44s (x8 over 4m44s)  kubelet          Node addons-712341 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m44s (x8 over 4m44s)  kubelet          Node addons-712341 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m44s (x7 over 4m44s)  kubelet          Node addons-712341 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m37s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m36s                  kubelet          Node addons-712341 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m36s                  kubelet          Node addons-712341 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m36s                  kubelet          Node addons-712341 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m35s                  kubelet          Node addons-712341 status is now: NodeReady
	  Normal  RegisteredNode           4m32s                  node-controller  Node addons-712341 event: Registered Node addons-712341 in Controller
	
	
	==> dmesg <==
	[  +0.036264] kauditd_printk_skb: 230 callbacks suppressed
	[  +0.000044] kauditd_printk_skb: 456 callbacks suppressed
	[Dec 9 01:57] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.814639] kauditd_printk_skb: 32 callbacks suppressed
	[  +7.450317] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.017935] kauditd_printk_skb: 122 callbacks suppressed
	[  +3.043245] kauditd_printk_skb: 75 callbacks suppressed
	[  +5.222488] kauditd_printk_skb: 56 callbacks suppressed
	[  +4.184648] kauditd_printk_skb: 126 callbacks suppressed
	[  +0.000031] kauditd_printk_skb: 20 callbacks suppressed
	[  +0.000051] kauditd_printk_skb: 29 callbacks suppressed
	[Dec 9 01:58] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.685665] kauditd_printk_skb: 47 callbacks suppressed
	[  +9.494229] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.990391] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.973091] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.399180] kauditd_printk_skb: 174 callbacks suppressed
	[  +0.000029] kauditd_printk_skb: 197 callbacks suppressed
	[  +3.500959] kauditd_printk_skb: 106 callbacks suppressed
	[  +0.000044] kauditd_printk_skb: 35 callbacks suppressed
	[Dec 9 01:59] kauditd_printk_skb: 65 callbacks suppressed
	[ +10.678302] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.000308] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.791965] kauditd_printk_skb: 41 callbacks suppressed
	[Dec 9 02:01] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [cc82dcd02980dfe5cfcad067f06da24ccad8715782004643b6379245ab335497] <==
	{"level":"warn","ts":"2025-12-09T01:57:11.885110Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-09T01:57:11.523196Z","time spent":"361.859922ms","remote":"127.0.0.1:34258","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:962 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-12-09T01:57:11.885162Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-09T01:57:11.568969Z","time spent":"316.130105ms","remote":"127.0.0.1:34306","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-12-09T01:57:11.885415Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"272.22843ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-12-09T01:57:11.885670Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"220.777985ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T01:57:11.885696Z","caller":"traceutil/trace.go:172","msg":"trace[686371842] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:971; }","duration":"220.805354ms","start":"2025-12-09T01:57:11.664885Z","end":"2025-12-09T01:57:11.885691Z","steps":["trace[686371842] 'agreement among raft nodes before linearized reading'  (duration: 220.763473ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T01:57:11.885741Z","caller":"traceutil/trace.go:172","msg":"trace[1386898485] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:971; }","duration":"272.550807ms","start":"2025-12-09T01:57:11.613179Z","end":"2025-12-09T01:57:11.885730Z","steps":["trace[1386898485] 'agreement among raft nodes before linearized reading'  (duration: 272.205022ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T01:57:11.885861Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"272.675111ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T01:57:11.885876Z","caller":"traceutil/trace.go:172","msg":"trace[1439734591] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:971; }","duration":"272.692165ms","start":"2025-12-09T01:57:11.613179Z","end":"2025-12-09T01:57:11.885872Z","steps":["trace[1439734591] 'agreement among raft nodes before linearized reading'  (duration: 272.661957ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T01:57:14.156451Z","caller":"traceutil/trace.go:172","msg":"trace[1588852585] transaction","detail":"{read_only:false; response_revision:974; number_of_response:1; }","duration":"256.622976ms","start":"2025-12-09T01:57:13.899816Z","end":"2025-12-09T01:57:14.156439Z","steps":["trace[1588852585] 'process raft request'  (duration: 256.266228ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T01:57:21.910872Z","caller":"traceutil/trace.go:172","msg":"trace[1193035101] transaction","detail":"{read_only:false; response_revision:996; number_of_response:1; }","duration":"151.454529ms","start":"2025-12-09T01:57:21.759405Z","end":"2025-12-09T01:57:21.910860Z","steps":["trace[1193035101] 'process raft request'  (duration: 151.351921ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T01:57:22.381173Z","caller":"traceutil/trace.go:172","msg":"trace[1501272587] transaction","detail":"{read_only:false; response_revision:997; number_of_response:1; }","duration":"138.426806ms","start":"2025-12-09T01:57:22.242732Z","end":"2025-12-09T01:57:22.381159Z","steps":["trace[1501272587] 'process raft request'  (duration: 138.314209ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T01:57:24.618967Z","caller":"traceutil/trace.go:172","msg":"trace[1682335110] transaction","detail":"{read_only:false; response_revision:1005; number_of_response:1; }","duration":"219.870394ms","start":"2025-12-09T01:57:24.399085Z","end":"2025-12-09T01:57:24.618955Z","steps":["trace[1682335110] 'process raft request'  (duration: 219.380248ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T01:57:30.789092Z","caller":"traceutil/trace.go:172","msg":"trace[979625353] transaction","detail":"{read_only:false; response_revision:1044; number_of_response:1; }","duration":"120.317673ms","start":"2025-12-09T01:57:30.668762Z","end":"2025-12-09T01:57:30.789079Z","steps":["trace[979625353] 'process raft request'  (duration: 120.199337ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T01:57:32.122185Z","caller":"traceutil/trace.go:172","msg":"trace[1451172396] linearizableReadLoop","detail":"{readStateIndex:1069; appliedIndex:1069; }","duration":"113.257584ms","start":"2025-12-09T01:57:32.008911Z","end":"2025-12-09T01:57:32.122169Z","steps":["trace[1451172396] 'read index received'  (duration: 113.253359ms)","trace[1451172396] 'applied index is now lower than readState.Index'  (duration: 3.586µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-09T01:57:32.122344Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.416456ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T01:57:32.122366Z","caller":"traceutil/trace.go:172","msg":"trace[585012504] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices; range_end:; response_count:0; response_revision:1045; }","duration":"113.483429ms","start":"2025-12-09T01:57:32.008878Z","end":"2025-12-09T01:57:32.122361Z","steps":["trace[585012504] 'agreement among raft nodes before linearized reading'  (duration: 113.385292ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T01:57:32.124355Z","caller":"traceutil/trace.go:172","msg":"trace[148790987] transaction","detail":"{read_only:false; response_revision:1046; number_of_response:1; }","duration":"236.432391ms","start":"2025-12-09T01:57:31.887910Z","end":"2025-12-09T01:57:32.124342Z","steps":["trace[148790987] 'process raft request'  (duration: 234.647039ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T01:57:59.017735Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"173.03494ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-12-09T01:57:59.018072Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"154.840959ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T01:57:59.020012Z","caller":"traceutil/trace.go:172","msg":"trace[1329056508] range","detail":"{range_begin:/registry/flowschemas; range_end:; response_count:0; response_revision:1169; }","duration":"156.575633ms","start":"2025-12-09T01:57:58.863215Z","end":"2025-12-09T01:57:59.019790Z","steps":["trace[1329056508] 'range keys from in-memory index tree'  (duration: 154.784877ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T01:57:59.019528Z","caller":"traceutil/trace.go:172","msg":"trace[1915387367] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1169; }","duration":"174.767028ms","start":"2025-12-09T01:57:58.844672Z","end":"2025-12-09T01:57:59.019439Z","steps":["trace[1915387367] 'range keys from in-memory index tree'  (duration: 173.023488ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T01:58:07.253874Z","caller":"traceutil/trace.go:172","msg":"trace[1884630933] transaction","detail":"{read_only:false; response_revision:1212; number_of_response:1; }","duration":"144.467179ms","start":"2025-12-09T01:58:07.109393Z","end":"2025-12-09T01:58:07.253860Z","steps":["trace[1884630933] 'process raft request'  (duration: 143.645991ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T01:58:31.511333Z","caller":"traceutil/trace.go:172","msg":"trace[1635541547] transaction","detail":"{read_only:false; response_revision:1356; number_of_response:1; }","duration":"104.116387ms","start":"2025-12-09T01:58:31.407195Z","end":"2025-12-09T01:58:31.511311Z","steps":["trace[1635541547] 'process raft request'  (duration: 103.851396ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T01:58:45.419703Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"212.098626ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2025-12-09T01:58:45.419806Z","caller":"traceutil/trace.go:172","msg":"trace[235655878] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1543; }","duration":"212.222586ms","start":"2025-12-09T01:58:45.207572Z","end":"2025-12-09T01:58:45.419794Z","steps":["trace[235655878] 'range keys from in-memory index tree'  (duration: 211.879958ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:01:10 up 5 min,  0 users,  load average: 0.50, 1.40, 0.75
	Linux addons-712341 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [685da6ee8ce553eb479d57c5570e5ce09b45f9f091f643861572f0b00fa9f7c4] <==
	E1209 01:57:09.622530       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.183.186:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.183.186:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.183.186:443: connect: connection refused" logger="UnhandledError"
	E1209 01:57:09.630707       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.183.186:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.183.186:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.183.186:443: connect: connection refused" logger="UnhandledError"
	I1209 01:57:09.768552       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1209 01:58:17.694300       1 conn.go:339] Error on socket receive: read tcp 192.168.39.107:8443->192.168.39.1:51060: use of closed network connection
	E1209 01:58:17.938866       1 conn.go:339] Error on socket receive: read tcp 192.168.39.107:8443->192.168.39.1:51092: use of closed network connection
	I1209 01:58:27.234163       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.147.171"}
	I1209 01:58:40.397708       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1209 01:58:40.617188       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.181.85"}
	E1209 01:58:59.797474       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1209 01:59:02.965310       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1209 01:59:10.643863       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1209 01:59:32.281429       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 01:59:32.282261       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 01:59:32.328400       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 01:59:32.328462       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 01:59:32.334356       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 01:59:32.335741       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 01:59:32.356051       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 01:59:32.356108       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 01:59:32.379122       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 01:59:32.379176       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1209 01:59:33.335029       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1209 01:59:33.379202       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1209 01:59:33.399656       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1209 02:01:08.693028       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.245.66"}
	
	
	==> kube-controller-manager [4a3b82a29bc88ba34fdd0a63cfa749adabfbbce5ee66a7027143a11789da78ba] <==
	I1209 01:59:39.318114       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1209 01:59:40.794263       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1209 01:59:40.795727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1209 01:59:41.014658       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1209 01:59:41.015830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1209 01:59:43.113962       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1209 01:59:43.115036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1209 01:59:51.375832       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1209 01:59:51.377508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1209 01:59:51.727942       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1209 01:59:51.729082       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1209 01:59:53.498247       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1209 01:59:53.499268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1209 02:00:09.092535       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1209 02:00:09.093945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1209 02:00:10.738639       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1209 02:00:10.739759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1209 02:00:13.109683       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1209 02:00:13.111686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1209 02:00:38.862688       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1209 02:00:38.863766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1209 02:00:44.238065       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1209 02:00:44.239191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1209 02:00:59.171108       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1209 02:00:59.172840       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [6720b9b4382c48c64fdec86c2fd0596e617c82196ba5f4b5489e136a804fc6fb] <==
	I1209 01:56:41.667158       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1209 01:56:41.770763       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1209 01:56:41.770817       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.107"]
	E1209 01:56:41.770905       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 01:56:41.913406       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1209 01:56:41.913504       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 01:56:41.913533       1 server_linux.go:132] "Using iptables Proxier"
	I1209 01:56:41.961751       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 01:56:41.962992       1 server.go:527] "Version info" version="v1.34.2"
	I1209 01:56:41.963007       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 01:56:41.979972       1 config.go:200] "Starting service config controller"
	I1209 01:56:41.980086       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 01:56:41.980114       1 config.go:106] "Starting endpoint slice config controller"
	I1209 01:56:41.980117       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 01:56:41.980127       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 01:56:41.980131       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 01:56:41.986531       1 config.go:309] "Starting node config controller"
	I1209 01:56:41.993120       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 01:56:41.993139       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 01:56:42.080271       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 01:56:42.080347       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1209 01:56:42.081078       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7ecfa308eea4cdeebe3c9474876bba25ef96e20f8e8cf4305f0bf1a32112ee5b] <==
	E1209 01:56:31.341216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1209 01:56:31.341271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1209 01:56:31.341332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1209 01:56:31.341383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1209 01:56:31.341498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1209 01:56:31.341631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1209 01:56:31.341692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1209 01:56:31.345079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1209 01:56:31.345218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1209 01:56:31.345517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1209 01:56:31.345628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1209 01:56:31.345673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1209 01:56:31.345671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1209 01:56:31.345741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1209 01:56:32.179215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1209 01:56:32.220888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1209 01:56:32.248416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1209 01:56:32.248500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1209 01:56:32.256655       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1209 01:56:32.259745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1209 01:56:32.290689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1209 01:56:32.324893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1209 01:56:32.331508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1209 01:56:32.411075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1209 01:56:35.027431       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 09 01:59:36 addons-712341 kubelet[1520]: I1209 01:59:36.015643    1520 scope.go:117] "RemoveContainer" containerID="bff4550f52da7913f39c959e5ee03c17b88732e9fee0da2208440c0a9a1f2b70"
	Dec 09 01:59:36 addons-712341 kubelet[1520]: I1209 01:59:36.139811    1520 scope.go:117] "RemoveContainer" containerID="c7ae4df8cc2814561ba847252e260884dd3d7d2e529f06fd0777f671bfccfc58"
	Dec 09 01:59:36 addons-712341 kubelet[1520]: I1209 01:59:36.261715    1520 scope.go:117] "RemoveContainer" containerID="4268b5239a5866c7c04b11e1a8e21f9cd0c8d1dbfd623f94d78d7e16e5646214"
	Dec 09 01:59:36 addons-712341 kubelet[1520]: I1209 01:59:36.380910    1520 scope.go:117] "RemoveContainer" containerID="65e0709f2bd968c6ef390b89c68a0e5cc4e56c9c0b6a1c830a41130c46cdbfe1"
	Dec 09 01:59:44 addons-712341 kubelet[1520]: E1209 01:59:44.181694    1520 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765245584177257750 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 09 01:59:44 addons-712341 kubelet[1520]: E1209 01:59:44.181719    1520 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765245584177257750 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 09 01:59:54 addons-712341 kubelet[1520]: E1209 01:59:54.184196    1520 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765245594183723092 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 09 01:59:54 addons-712341 kubelet[1520]: E1209 01:59:54.184228    1520 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765245594183723092 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 09 02:00:04 addons-712341 kubelet[1520]: E1209 02:00:04.187097    1520 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765245604186764194 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 09 02:00:04 addons-712341 kubelet[1520]: E1209 02:00:04.187127    1520 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765245604186764194 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 09 02:00:14 addons-712341 kubelet[1520]: E1209 02:00:14.189307    1520 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765245614188984154 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 09 02:00:14 addons-712341 kubelet[1520]: E1209 02:00:14.189328    1520 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765245614188984154 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 09 02:00:24 addons-712341 kubelet[1520]: E1209 02:00:24.192705    1520 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765245624192302673 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 09 02:00:24 addons-712341 kubelet[1520]: E1209 02:00:24.192729    1520 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765245624192302673 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 09 02:00:27 addons-712341 kubelet[1520]: I1209 02:00:27.867068    1520 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-v9zls" secret="" err="secret \"gcp-auth\" not found"
	Dec 09 02:00:34 addons-712341 kubelet[1520]: E1209 02:00:34.195757    1520 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765245634195274055 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 09 02:00:34 addons-712341 kubelet[1520]: E1209 02:00:34.195816    1520 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765245634195274055 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 09 02:00:44 addons-712341 kubelet[1520]: E1209 02:00:44.198972    1520 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765245644198491809 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 09 02:00:44 addons-712341 kubelet[1520]: E1209 02:00:44.199006    1520 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765245644198491809 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 09 02:00:50 addons-712341 kubelet[1520]: I1209 02:00:50.865905    1520 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 09 02:00:54 addons-712341 kubelet[1520]: E1209 02:00:54.201975    1520 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765245654201421824 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 09 02:00:54 addons-712341 kubelet[1520]: E1209 02:00:54.202018    1520 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765245654201421824 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 09 02:01:04 addons-712341 kubelet[1520]: E1209 02:01:04.205101    1520 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765245664204500677 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 09 02:01:04 addons-712341 kubelet[1520]: E1209 02:01:04.205308    1520 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765245664204500677 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
	Dec 09 02:01:08 addons-712341 kubelet[1520]: I1209 02:01:08.677649    1520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x28j\" (UniqueName: \"kubernetes.io/projected/c662e72a-cc05-4c42-9e4a-0643c57478d7-kube-api-access-9x28j\") pod \"hello-world-app-5d498dc89-lmwdx\" (UID: \"c662e72a-cc05-4c42-9e4a-0643c57478d7\") " pod="default/hello-world-app-5d498dc89-lmwdx"
	
	
	==> storage-provisioner [260555cd1575816836ddb050ffe5036a4263d87790b0362a7a833bdf6d25fdb5] <==
	W1209 02:00:44.531410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:00:46.535498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:00:46.541649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:00:48.546474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:00:48.554210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:00:50.558572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:00:50.564751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:00:52.568212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:00:52.575686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:00:54.580056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:00:54.586887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:00:56.589898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:00:56.596446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:00:58.600113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:00:58.606136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:01:00.610528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:01:00.618113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:01:02.622638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:01:02.629642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:01:04.632884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:01:04.639076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:01:06.646721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:01:06.655279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:01:08.675743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:01:08.719481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-712341 -n addons-712341
helpers_test.go:269: (dbg) Run:  kubectl --context addons-712341 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-lmwdx ingress-nginx-admission-create-7bf82 ingress-nginx-admission-patch-d4sv2
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-712341 describe pod hello-world-app-5d498dc89-lmwdx ingress-nginx-admission-create-7bf82 ingress-nginx-admission-patch-d4sv2
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-712341 describe pod hello-world-app-5d498dc89-lmwdx ingress-nginx-admission-create-7bf82 ingress-nginx-admission-patch-d4sv2: exit status 1 (80.538643ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-lmwdx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-712341/192.168.39.107
	Start Time:       Tue, 09 Dec 2025 02:01:08 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9x28j (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9x28j:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-lmwdx to addons-712341
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-7bf82" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-d4sv2" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-712341 describe pod hello-world-app-5d498dc89-lmwdx ingress-nginx-admission-create-7bf82 ingress-nginx-admission-patch-d4sv2: exit status 1
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-712341 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1113: (dbg) Done: out/minikube-linux-amd64 -p addons-712341 addons disable ingress-dns --alsologtostderr -v=1: (1.087299583s)
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-712341 addons disable ingress --alsologtostderr -v=1
addons_test.go:1113: (dbg) Done: out/minikube-linux-amd64 -p addons-712341 addons disable ingress --alsologtostderr -v=1: (7.819309488s)
--- FAIL: TestAddons/parallel/Ingress (159.91s)
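For anyone triaging this failure by hand, the post-mortem's check for non-running pods (helpers_test.go:269 and :285 above) can be repeated against the same profile. A minimal sketch, assuming the addons-712341 profile is still running; only standard kubectl flags are used, and <pod-name>/<namespace> are placeholders:

	# List pods that are not Running, using the same field selector the test harness does
	kubectl --context addons-712341 get po -A --field-selector=status.phase!=Running
	# Describe whichever pod the listing reports (substitute real names)
	kubectl --context addons-712341 describe pod <pod-name> -n <namespace>

This is the same query behind the "non-running pods" line above, so its output should match what the harness reported at the time of the failure.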

TestFunctional/parallel/DashboardCmd (302.35s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd


=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-545294 --alsologtostderr -v=1]
E1209 02:08:08.553816  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-545294 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-545294 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-545294 --alsologtostderr -v=1] stderr:
I1209 02:07:50.395066  265481 out.go:360] Setting OutFile to fd 1 ...
I1209 02:07:50.395328  265481 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:07:50.395338  265481 out.go:374] Setting ErrFile to fd 2...
I1209 02:07:50.395342  265481 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:07:50.395559  265481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
I1209 02:07:50.395857  265481 mustload.go:66] Loading cluster: functional-545294
I1209 02:07:50.396258  265481 config.go:182] Loaded profile config "functional-545294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 02:07:50.398328  265481 host.go:66] Checking if "functional-545294" exists ...
I1209 02:07:50.398538  265481 api_server.go:166] Checking apiserver status ...
I1209 02:07:50.398584  265481 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1209 02:07:50.400994  265481 main.go:143] libmachine: domain functional-545294 has defined MAC address 52:54:00:47:35:52 in network mk-functional-545294
I1209 02:07:50.401445  265481 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:47:35:52", ip: ""} in network mk-functional-545294: {Iface:virbr1 ExpiryTime:2025-12-09 03:03:50 +0000 UTC Type:0 Mac:52:54:00:47:35:52 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:functional-545294 Clientid:01:52:54:00:47:35:52}
I1209 02:07:50.401478  265481 main.go:143] libmachine: domain functional-545294 has defined IP address 192.168.39.184 and MAC address 52:54:00:47:35:52 in network mk-functional-545294
I1209 02:07:50.401674  265481 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/functional-545294/id_rsa Username:docker}
I1209 02:07:50.506816  265481 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6496/cgroup
W1209 02:07:50.520597  265481 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/6496/cgroup: Process exited with status 1
stdout:
stderr:
I1209 02:07:50.520659  265481 ssh_runner.go:195] Run: ls
I1209 02:07:50.526714  265481 api_server.go:253] Checking apiserver healthz at https://192.168.39.184:8441/healthz ...
I1209 02:07:50.531969  265481 api_server.go:279] https://192.168.39.184:8441/healthz returned 200:
ok
W1209 02:07:50.532039  265481 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1209 02:07:50.532216  265481 config.go:182] Loaded profile config "functional-545294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 02:07:50.532237  265481 addons.go:70] Setting dashboard=true in profile "functional-545294"
I1209 02:07:50.532244  265481 addons.go:239] Setting addon dashboard=true in "functional-545294"
I1209 02:07:50.532269  265481 host.go:66] Checking if "functional-545294" exists ...
I1209 02:07:50.536343  265481 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1209 02:07:50.537818  265481 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1209 02:07:50.539043  265481 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1209 02:07:50.539062  265481 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1209 02:07:50.541886  265481 main.go:143] libmachine: domain functional-545294 has defined MAC address 52:54:00:47:35:52 in network mk-functional-545294
I1209 02:07:50.542352  265481 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:47:35:52", ip: ""} in network mk-functional-545294: {Iface:virbr1 ExpiryTime:2025-12-09 03:03:50 +0000 UTC Type:0 Mac:52:54:00:47:35:52 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:functional-545294 Clientid:01:52:54:00:47:35:52}
I1209 02:07:50.542373  265481 main.go:143] libmachine: domain functional-545294 has defined IP address 192.168.39.184 and MAC address 52:54:00:47:35:52 in network mk-functional-545294
I1209 02:07:50.542507  265481 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/functional-545294/id_rsa Username:docker}
I1209 02:07:50.664437  265481 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1209 02:07:50.664473  265481 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1209 02:07:50.695203  265481 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1209 02:07:50.695238  265481 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1209 02:07:50.725937  265481 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1209 02:07:50.725973  265481 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1209 02:07:50.753431  265481 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1209 02:07:50.753465  265481 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1209 02:07:50.780534  265481 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1209 02:07:50.780572  265481 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1209 02:07:50.810375  265481 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1209 02:07:50.810411  265481 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1209 02:07:50.838909  265481 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1209 02:07:50.838955  265481 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1209 02:07:50.871765  265481 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1209 02:07:50.871800  265481 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1209 02:07:50.896063  265481 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1209 02:07:50.896096  265481 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1209 02:07:50.925002  265481 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1209 02:07:51.818235  265481 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	minikube -p functional-545294 addons enable metrics-server
I1209 02:07:51.819515  265481 addons.go:202] Writing out "functional-545294" config to set dashboard=true...
W1209 02:07:51.819835  265481 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1209 02:07:51.820586  265481 kapi.go:59] client config for functional-545294: &rest.Config{Host:"https://192.168.39.184:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.key", CAFile:"/home/jenkins/minikube-integration/22081-254936/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28162e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1209 02:07:51.821131  265481 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1209 02:07:51.821159  265481 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1209 02:07:51.821165  265481 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1209 02:07:51.821174  265481 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1209 02:07:51.821183  265481 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1209 02:07:51.831842  265481 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  97692b4e-c478-443c-9353-4434126bafa6 881 0 2025-12-09 02:07:51 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-09 02:07:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.100.101.45,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.100.101.45],IPFamilies:[IPv4],AllocateLoadBalance
rNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1209 02:07:51.832010  265481 out.go:285] * Launching proxy ...
* Launching proxy ...
I1209 02:07:51.832092  265481 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-545294 proxy --port 36195]
I1209 02:07:51.832562  265481 dashboard.go:159] Waiting for kubectl to output host:port ...
I1209 02:07:51.879677  265481 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1209 02:07:51.879770  265481 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1209 02:07:51.898890  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8d3c9b57-a255-408c-b11b-025f84a9bb11] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:07:51 GMT]] Body:0xc0014bb800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000478a00 TLS:<nil>}
I1209 02:07:51.899000  265481 retry.go:31] will retry after 83.399µs: Temporary Error: unexpected response code: 503
I1209 02:07:51.903797  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bde55ccd-06e6-4220-a04e-be027c00a9bc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:07:51 GMT]] Body:0xc0016c6140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003cb400 TLS:<nil>}
I1209 02:07:51.903898  265481 retry.go:31] will retry after 142.819µs: Temporary Error: unexpected response code: 503
I1209 02:07:51.909547  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8a88d9d5-98f5-4e54-8f3c-2748964eb41b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:07:51 GMT]] Body:0xc0014bb900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000478c80 TLS:<nil>}
I1209 02:07:51.909616  265481 retry.go:31] will retry after 206.331µs: Temporary Error: unexpected response code: 503
I1209 02:07:51.915655  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e3e970ae-d4c6-4058-880f-124bd2d2066a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:07:51 GMT]] Body:0xc0014bb9c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003cb7c0 TLS:<nil>}
I1209 02:07:51.915722  265481 retry.go:31] will retry after 503.502µs: Temporary Error: unexpected response code: 503
I1209 02:07:51.919793  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5e386982-9b6d-40f5-9598-2065df19c633] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:07:51 GMT]] Body:0xc0015a2680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003cb900 TLS:<nil>}
I1209 02:07:51.919908  265481 retry.go:31] will retry after 472.2µs: Temporary Error: unexpected response code: 503
I1209 02:07:51.923965  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[86f78fe8-c809-4f1e-be0b-60df7abefab1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:07:51 GMT]] Body:0xc0015a2780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208640 TLS:<nil>}
I1209 02:07:51.924027  265481 retry.go:31] will retry after 451.178µs: Temporary Error: unexpected response code: 503
I1209 02:07:51.928171  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f2a87f9b-1b90-46e4-acc3-5f7374617588] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:07:51 GMT]] Body:0xc0016c6240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208780 TLS:<nil>}
I1209 02:07:51.928234  265481 retry.go:31] will retry after 1.050351ms: Temporary Error: unexpected response code: 503
I1209 02:07:51.933009  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a9f9bb26-c41b-414c-9b3d-fa42b2fdcca7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:07:51 GMT]] Body:0xc0014bbac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000478f00 TLS:<nil>}
I1209 02:07:51.933088  265481 retry.go:31] will retry after 1.759325ms: Temporary Error: unexpected response code: 503
I1209 02:07:51.938026  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cad73e7a-6a36-46cf-a5c6-c97fc31e5eea] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:07:51 GMT]] Body:0xc0016c6340 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003cba40 TLS:<nil>}
I1209 02:07:51.938108  265481 retry.go:31] will retry after 2.267237ms: Temporary Error: unexpected response code: 503
I1209 02:07:51.945186  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6a92970a-8bab-4995-90d9-5d431acc92e6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:07:51 GMT]] Body:0xc0014bbbc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000479040 TLS:<nil>}
I1209 02:07:51.945254  265481 retry.go:31] will retry after 2.984237ms: Temporary Error: unexpected response code: 503
I1209 02:07:51.954989  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[035b61b9-e041-43e7-9cf2-21edf399ee9a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:07:51 GMT]] Body:0xc0016c6440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003cbb80 TLS:<nil>}
I1209 02:07:51.955083  265481 retry.go:31] will retry after 4.024527ms: Temporary Error: unexpected response code: 503
I1209 02:07:51.962385  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a5394916-12ad-4b65-aa87-d4ecc8b891f5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:07:51 GMT]] Body:0xc0015a2880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000479180 TLS:<nil>}
I1209 02:07:51.962455  265481 retry.go:31] will retry after 10.301246ms: Temporary Error: unexpected response code: 503
I1209 02:07:51.977544  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d078d8cb-20b8-4e29-8e7d-4f0638df1ee6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:07:51 GMT]] Body:0xc0015a2940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208a00 TLS:<nil>}
I1209 02:07:51.977635  265481 retry.go:31] will retry after 7.060262ms: Temporary Error: unexpected response code: 503
I1209 02:07:51.989055  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[890aa169-3098-422b-9d41-7d66f62c01ac] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:07:51 GMT]] Body:0xc0014bbcc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208b40 TLS:<nil>}
I1209 02:07:51.989148  265481 retry.go:31] will retry after 21.686315ms: Temporary Error: unexpected response code: 503
I1209 02:07:52.021064  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[96cbc83b-b283-4689-b565-b07cc7dd4072] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:07:52 GMT]] Body:0xc0014bbd80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003cbe00 TLS:<nil>}
I1209 02:07:52.021161  265481 retry.go:31] will retry after 36.308486ms: Temporary Error: unexpected response code: 503
I1209 02:07:52.065029  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[aa90a349-33c0-4c00-9db9-f12919f8754f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:07:52 GMT]] Body:0xc0015a2a80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c1b80 TLS:<nil>}
I1209 02:07:52.065130  265481 retry.go:31] will retry after 65.624906ms: Temporary Error: unexpected response code: 503
I1209 02:07:52.135657  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b69dde16-4c89-4616-b6cb-85ffdd67dfd5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:07:52 GMT]] Body:0xc0015a2b40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208c80 TLS:<nil>}
I1209 02:07:52.135743  265481 retry.go:31] will retry after 59.356083ms: Temporary Error: unexpected response code: 503
I1209 02:07:52.203365  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8c9d6f19-6f48-4326-a76d-740671fdc774] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:07:52 GMT]] Body:0xc0016c6580 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208dc0 TLS:<nil>}
I1209 02:07:52.203453  265481 retry.go:31] will retry after 141.321383ms: Temporary Error: unexpected response code: 503
I1209 02:07:52.351928  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2866246f-d701-4710-a970-51daa0cd7d08] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:07:52 GMT]] Body:0xc0014bbf40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004792c0 TLS:<nil>}
I1209 02:07:52.352024  265481 retry.go:31] will retry after 83.995346ms: Temporary Error: unexpected response code: 503
I1209 02:07:52.441573  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0b4ec6ce-2d16-4bf3-8b8d-e25506927320] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:07:52 GMT]] Body:0xc0016c6680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c1cc0 TLS:<nil>}
I1209 02:07:52.441696  265481 retry.go:31] will retry after 116.903165ms: Temporary Error: unexpected response code: 503
I1209 02:07:52.562742  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7a0dbc2a-0b09-46a3-bac1-37f00311c4b5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:07:52 GMT]] Body:0xc0015a2bc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000479400 TLS:<nil>}
I1209 02:07:52.562858  265481 retry.go:31] will retry after 349.453005ms: Temporary Error: unexpected response code: 503
I1209 02:07:52.917277  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[48bb3cb8-0a29-4baa-9292-051539bbb812] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:07:52 GMT]] Body:0xc00176e0c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209040 TLS:<nil>}
I1209 02:07:52.917353  265481 retry.go:31] will retry after 653.06174ms: Temporary Error: unexpected response code: 503
I1209 02:07:53.574639  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e405cfc5-1947-4e85-b488-02533ae23d68] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:07:53 GMT]] Body:0xc0016c67c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c1e00 TLS:<nil>}
I1209 02:07:53.574759  265481 retry.go:31] will retry after 1.053706333s: Temporary Error: unexpected response code: 503
I1209 02:07:54.632696  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1eea0f35-259a-414d-bbdd-2e4c7630206c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:07:54 GMT]] Body:0xc0015a2cc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000479540 TLS:<nil>}
I1209 02:07:54.632788  265481 retry.go:31] will retry after 1.526590705s: Temporary Error: unexpected response code: 503
I1209 02:07:56.164477  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[08db26eb-67c3-4691-9898-cd983d00d814] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:07:56 GMT]] Body:0xc0016c68c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001776000 TLS:<nil>}
I1209 02:07:56.164564  265481 retry.go:31] will retry after 1.430234232s: Temporary Error: unexpected response code: 503
I1209 02:07:57.599653  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5651ccc3-f640-4121-969e-d86a001237d1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:07:57 GMT]] Body:0xc0016c6980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000479680 TLS:<nil>}
I1209 02:07:57.599736  265481 retry.go:31] will retry after 2.102144255s: Temporary Error: unexpected response code: 503
I1209 02:07:59.707587  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[61b56707-8cb7-4a72-9f6f-b0232af75fe0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:07:59 GMT]] Body:0xc0015a2d80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004797c0 TLS:<nil>}
I1209 02:07:59.707660  265481 retry.go:31] will retry after 4.373502413s: Temporary Error: unexpected response code: 503
I1209 02:08:04.087005  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7e49251b-7ae6-4248-a991-bdff51f583e7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:08:04 GMT]] Body:0xc00176e200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209900 TLS:<nil>}
I1209 02:08:04.087077  265481 retry.go:31] will retry after 7.942211204s: Temporary Error: unexpected response code: 503
I1209 02:08:12.036573  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9a5d6e73-7532-4561-bc34-bd60e24255f1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:08:12 GMT]] Body:0xc00176e280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000479900 TLS:<nil>}
I1209 02:08:12.036662  265481 retry.go:31] will retry after 6.064992682s: Temporary Error: unexpected response code: 503
I1209 02:08:18.108159  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[40d27eeb-52b7-456d-9614-e9c0b0cd7e88] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:08:18 GMT]] Body:0xc0015a2e40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001776140 TLS:<nil>}
I1209 02:08:18.108229  265481 retry.go:31] will retry after 8.27502882s: Temporary Error: unexpected response code: 503
I1209 02:08:26.387855  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ac77993c-839f-4506-99cb-f8c6eaabf9dc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:08:26 GMT]] Body:0xc0016c6b40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00178c000 TLS:<nil>}
I1209 02:08:26.387955  265481 retry.go:31] will retry after 9.752577396s: Temporary Error: unexpected response code: 503
I1209 02:08:36.144944  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7c83df68-63d3-439d-b4b4-aa20edfa8bf4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:08:36 GMT]] Body:0xc0015a2f80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00178c140 TLS:<nil>}
I1209 02:08:36.145008  265481 retry.go:31] will retry after 31.313316566s: Temporary Error: unexpected response code: 503
I1209 02:09:07.464233  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bec44dfa-5bdb-4599-8a08-c1f63e9fb6f6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:09:07 GMT]] Body:0xc0016c6c00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00178c280 TLS:<nil>}
I1209 02:09:07.464315  265481 retry.go:31] will retry after 34.884557228s: Temporary Error: unexpected response code: 503
I1209 02:09:42.352919  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c6618254-0928-4aa3-a792-bada8aecedbf] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:09:42 GMT]] Body:0xc0015a3080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001776280 TLS:<nil>}
I1209 02:09:42.352994  265481 retry.go:31] will retry after 32.772603373s: Temporary Error: unexpected response code: 503
I1209 02:10:15.132733  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[91a6bc40-9ab5-43f8-86e0-84c0c06dbe26] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:10:15 GMT]] Body:0xc0016c6080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00178c3c0 TLS:<nil>}
I1209 02:10:15.132818  265481 retry.go:31] will retry after 1m3.939256991s: Temporary Error: unexpected response code: 503
I1209 02:11:19.079388  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[74705fa4-78da-4a20-9c6f-de31ccb77167] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:11:19 GMT]] Body:0xc0015a20c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000478000 TLS:<nil>}
I1209 02:11:19.079505  265481 retry.go:31] will retry after 1m12.043744248s: Temporary Error: unexpected response code: 503
I1209 02:12:31.127632  265481 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[37eb15cc-e447-4880-88c7-7c29e0bd41dd] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 09 Dec 2025 02:12:31 GMT]] Body:0xc0016c6100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000478500 TLS:<nil>}
I1209 02:12:31.127743  265481 retry.go:31] will retry after 1m24.986343247s: Temporary Error: unexpected response code: 503
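The long tail of retries above is minikube's dashboard health check (dashboard.go) polling the kubectl proxy URL with increasing backoff while the service keeps answering 503, i.e. no ready endpoints behind the kubernetes-dashboard service. A hedged manual check, assuming the proxy from this log is still listening on 127.0.0.1:36195 (the deployment name kubernetes-dashboard is an assumption based on the addon's standard manifests, not something shown in this log):

	curl -i http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
	kubectl --context functional-545294 -n kubernetes-dashboard get pods
	kubectl --context functional-545294 -n kubernetes-dashboard rollout status deployment/kubernetes-dashboard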
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-545294 -n functional-545294
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-545294 logs -n 25: (1.530420685s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                  ARGS                                                   │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-545294 ssh findmnt -T /mount1                                                                │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:07 UTC │                     │
	│ ssh            │ functional-545294 ssh findmnt -T /mount1                                                                │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:07 UTC │ 09 Dec 25 02:07 UTC │
	│ ssh            │ functional-545294 ssh findmnt -T /mount2                                                                │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:07 UTC │ 09 Dec 25 02:07 UTC │
	│ ssh            │ functional-545294 ssh findmnt -T /mount3                                                                │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:07 UTC │ 09 Dec 25 02:07 UTC │
	│ mount          │ -p functional-545294 --kill=true                                                                        │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:07 UTC │                     │
	│ ssh            │ functional-545294 ssh sudo cat /etc/ssl/certs/258854.pem                                                │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:07 UTC │ 09 Dec 25 02:07 UTC │
	│ ssh            │ functional-545294 ssh sudo cat /usr/share/ca-certificates/258854.pem                                    │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:07 UTC │ 09 Dec 25 02:07 UTC │
	│ ssh            │ functional-545294 ssh sudo cat /etc/ssl/certs/51391683.0                                                │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:07 UTC │ 09 Dec 25 02:07 UTC │
	│ ssh            │ functional-545294 ssh sudo cat /etc/ssl/certs/2588542.pem                                               │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:07 UTC │ 09 Dec 25 02:07 UTC │
	│ ssh            │ functional-545294 ssh sudo cat /usr/share/ca-certificates/2588542.pem                                   │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:07 UTC │ 09 Dec 25 02:07 UTC │
	│ ssh            │ functional-545294 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:07 UTC │ 09 Dec 25 02:07 UTC │
	│ start          │ -p functional-545294 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:07 UTC │                     │
	│ start          │ -p functional-545294 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio           │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:07 UTC │                     │
	│ ssh            │ functional-545294 ssh sudo cat /etc/test/nested/copy/258854/hosts                                       │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:07 UTC │ 09 Dec 25 02:07 UTC │
	│ dashboard      │ --url --port 36195 -p functional-545294 --alsologtostderr -v=1                                          │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:07 UTC │                     │
	│ image          │ functional-545294 image ls --format short --alsologtostderr                                             │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:08 UTC │ 09 Dec 25 02:08 UTC │
	│ image          │ functional-545294 image ls --format yaml --alsologtostderr                                              │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:08 UTC │ 09 Dec 25 02:08 UTC │
	│ ssh            │ functional-545294 ssh pgrep buildkitd                                                                   │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:08 UTC │                     │
	│ image          │ functional-545294 image build -t localhost/my-image:functional-545294 testdata/build --alsologtostderr  │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:08 UTC │ 09 Dec 25 02:08 UTC │
	│ image          │ functional-545294 image ls                                                                              │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:08 UTC │ 09 Dec 25 02:08 UTC │
	│ image          │ functional-545294 image ls --format json --alsologtostderr                                              │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:08 UTC │ 09 Dec 25 02:08 UTC │
	│ image          │ functional-545294 image ls --format table --alsologtostderr                                             │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:08 UTC │ 09 Dec 25 02:08 UTC │
	│ update-context │ functional-545294 update-context --alsologtostderr -v=2                                                 │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:08 UTC │ 09 Dec 25 02:08 UTC │
	│ update-context │ functional-545294 update-context --alsologtostderr -v=2                                                 │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:08 UTC │ 09 Dec 25 02:08 UTC │
	│ update-context │ functional-545294 update-context --alsologtostderr -v=2                                                 │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:08 UTC │ 09 Dec 25 02:08 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 02:07:41
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 02:07:41.399695  265360 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:07:41.400095  265360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:07:41.400115  265360 out.go:374] Setting ErrFile to fd 2...
	I1209 02:07:41.400123  265360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:07:41.400441  265360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
	I1209 02:07:41.400982  265360 out.go:368] Setting JSON to false
	I1209 02:07:41.402038  265360 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":28211,"bootTime":1765217850,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:07:41.402104  265360 start.go:143] virtualization: kvm guest
	I1209 02:07:41.404511  265360 out.go:179] * [functional-545294] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 02:07:41.406041  265360 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:07:41.406047  265360 notify.go:221] Checking for updates...
	I1209 02:07:41.409129  265360 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:07:41.413797  265360 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-254936/kubeconfig
	I1209 02:07:41.415477  265360 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-254936/.minikube
	I1209 02:07:41.416913  265360 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:07:41.418187  265360 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:07:41.419756  265360 config.go:182] Loaded profile config "functional-545294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:07:41.420298  265360 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:07:41.454123  265360 out.go:179] * Using the kvm2 driver based on existing profile
	I1209 02:07:41.455559  265360 start.go:309] selected driver: kvm2
	I1209 02:07:41.455578  265360 start.go:927] validating driver "kvm2" against &{Name:functional-545294 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-545294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:07:41.455788  265360 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:07:41.456977  265360 cni.go:84] Creating CNI manager for ""
	I1209 02:07:41.457046  265360 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 02:07:41.457106  265360 start.go:353] cluster config:
	{Name:functional-545294 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-545294 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:07:41.458780  265360 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 09 02:12:51 functional-545294 crio[5890]: time="2025-12-09 02:12:51.167792531Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765246371167768240,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:240114,},InodesUsed:&UInt64Value{Value:104,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b52d275c-3791-416d-81e5-a4aafbb5fa0d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:12:51 functional-545294 crio[5890]: time="2025-12-09 02:12:51.168808258Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d93aabf1-dc33-4be6-94a8-99d5438b3be0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:12:51 functional-545294 crio[5890]: time="2025-12-09 02:12:51.168880256Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d93aabf1-dc33-4be6-94a8-99d5438b3be0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:12:51 functional-545294 crio[5890]: time="2025-12-09 02:12:51.169274014Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47025c8e887c9b1c3ff3b4d7ad309846f91891a5cfc411e153c7bea0e23bdd24,PodSandboxId:831e808d811a5b3edbc977765dfe4d922f201a07cb785f406d402f2e2138e496,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765246101229560357,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-nbwpp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0f362234-70c0-47ff-afab-c6cb6c695ef6,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:291f1e39539ff7d651b830e7e042f1b9fcdb535d35c3bb69037513f9a244efe0,PodSandboxId:270dd75437f03d4bdcdca5fab35a5917a40402319586d912142502218051eb5a,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765246064745716091,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c8feca44-44ed-4eb4-8817-2e317d08cf50,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ffd46bc92d655ae18972da483ae0b30262242aca0b5bbbf3d3e198e2b48fbc,PodSandboxId:4229a9648f0388b0c24f25f7aa77aacf8bd14e2cd4c7af11f8e5436d683ca9b2,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765246053617985118,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ebc811a7-8de7-4db7-ab8a-6466ab5be638,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb11585f8efb9685c531517ce6af0bd09674e3b5074d553d77e853bbacd7d794,PodSandboxId:b0d4c688e00ac0c63523f99e02d00e98818433a710144919e971d2dfc83f7472,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765245970662252981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gzjhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ccb903-d2f9-4e8c-b9fe-25384e736e56,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.po
rts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a0eb6016c3c8701b48b1c0daf6bd0ec2d9b246d60f5a2d55046e6c67d5e54cd,PodSandboxId:93dd276d617f88e3fcb4ba99fec9ce51d91246375b2553a08d899efea75fb28f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CO
NTAINER_RUNNING,CreatedAt:1765245970630464144,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7da85099-845e-43c0-abe3-694b2e59c644,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32e467a00eda98540f5fcd1ff5ac24480720d3fe71af69bd08b899810f97631,PodSandboxId:549a6ed604b063b938220bc6c839a85ec7d5c9fa23b1991faf03dd3a416180ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,Creat
edAt:1765245970581962315,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zwr8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17f41900-ed85-412a-b753-83ab0612a0d0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:600d2ecc96b44f7021b08ce2b5646ec800acfbb9b54d330fe059d5883db48b2e,PodSandboxId:06a38f77b61a47f375fe95f6b21b3046fbdf10582e34e2a4260019e558d8e573,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765245965739866434,Labe
ls:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff84ec456446fa53da87afee0f115e3f,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0561cdb1d940f72d9593f84899a0b04ab988d3ad75efe7b4ce43e03d904a29d,PodSandboxId:6e0a2877b725072401fab9146721d08dc56a4c4a81b3fbda297cc79dd8a963f1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8
bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765245965724590043,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfde71ed46fcf3e1413b4854831f8db9,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0150a2026dd7fbd8eef1be4f1149d3e711240afe63836d16882d2d4d2cd9d575,PodSandboxId:24063f0110f95e10240d247cba620e91ca39c134e08071df109b36ed7d5d6662,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581
b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765245965754099763,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5986582f384e8b76964852dba738451,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33019257a9a137de2f80ac7281f6d3349e5a11dccb52dfd133715a38715898e6,PodSandboxId:9bafe39d1fb4e9eca47c6e6b77ed0e9de1020797cc92de67eac433fe3f043fcc,Metadata:&ContainerMetada
ta{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765245965692497170,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d49e5f2eca05156c73661728e6bef94,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0791a3f540c43064eea797dadcd0d0b96a847e6c2b7d19bc79595a654597
4c2c,PodSandboxId:b57d9da4d6ff79239594fec3344c2d956d8f4be7ce4466409183d6f31fe170ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765245925898872103,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gzjhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ccb903-d2f9-4e8c-b9fe-25384e736e56,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness
-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a691542f0f67679391b917d59ffd9d5fe7c07b0b46dab363cd0d823a78d97cc,PodSandboxId:44f15f5505463fe9d6adebc9fdd36d278e59ed90cd5ac660218c41fafdef026d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765245925874479288,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zwr8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17f41900-ed85-412a-b753-83ab0612a0d0,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b94476afa73e7f5b21730d0829202d1d6524a0e811201d33665db75577741924,PodSandboxId:7471f0469bc616aead498ee3784d1390a01a97599c3d5a5318444e7deaa41c20,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765245925885367398,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7da85099-845e-43c0-abe3-694b2e59c644,},Annotations:map[string]string{io.kubernetes.container.hash: 6
c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3bd02c96f2d3339fa9d3bd443fc7063b2baf4b197fa5941beab45b59e9f1e71,PodSandboxId:c48aca2827221a2e8fa145591415bf88bceb762f5b087eed3f0844db1fdbb470,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765245921265200501,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfde71ed46fcf3e1413b4854831f8db9,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5e268ecdaf2fde98e798be33bab31cafba814db74814a43faf6667e7ce4f8c,PodSandboxId:c758dc0b99425c52831bdd825fbfbebf76a7560fb6bd8235f50f606ef4dd19d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765245921241204939,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-545294,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: ff84ec456446fa53da87afee0f115e3f,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04529d5f4a6494a40a53f85e66394b0cccf5a9dfa167ac714917d9c21812746c,PodSandboxId:72521b6a719a676feec078bcc76ce9308b46e162a4af8f254a06efb00c8f8e55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765245921228615658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kub
ernetes.pod.name: etcd-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5986582f384e8b76964852dba738451,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d93aabf1-dc33-4be6-94a8-99d5438b3be0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:12:51 functional-545294 crio[5890]: time="2025-12-09 02:12:51.216563542Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=57197d93-e351-4e4b-afc8-ba7a8d53ef57 name=/runtime.v1.RuntimeService/Version
	Dec 09 02:12:51 functional-545294 crio[5890]: time="2025-12-09 02:12:51.217140343Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=57197d93-e351-4e4b-afc8-ba7a8d53ef57 name=/runtime.v1.RuntimeService/Version
	Dec 09 02:12:51 functional-545294 crio[5890]: time="2025-12-09 02:12:51.218645944Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5508a871-147e-4197-99cf-561ea22e1fba name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:12:51 functional-545294 crio[5890]: time="2025-12-09 02:12:51.219432617Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765246371219407283,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:240114,},InodesUsed:&UInt64Value{Value:104,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5508a871-147e-4197-99cf-561ea22e1fba name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:12:51 functional-545294 crio[5890]: time="2025-12-09 02:12:51.220601987Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bfbc833c-3257-4afd-bb90-8693930b841c name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:12:51 functional-545294 crio[5890]: time="2025-12-09 02:12:51.220790014Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bfbc833c-3257-4afd-bb90-8693930b841c name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:12:51 functional-545294 crio[5890]: time="2025-12-09 02:12:51.221194442Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47025c8e887c9b1c3ff3b4d7ad309846f91891a5cfc411e153c7bea0e23bdd24,PodSandboxId:831e808d811a5b3edbc977765dfe4d922f201a07cb785f406d402f2e2138e496,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765246101229560357,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-nbwpp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0f362234-70c0-47ff-afab-c6cb6c695ef6,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:291f1e39539ff7d651b830e7e042f1b9fcdb535d35c3bb69037513f9a244efe0,PodSandboxId:270dd75437f03d4bdcdca5fab35a5917a40402319586d912142502218051eb5a,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765246064745716091,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c8feca44-44ed-4eb4-8817-2e317d08cf50,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ffd46bc92d655ae18972da483ae0b30262242aca0b5bbbf3d3e198e2b48fbc,PodSandboxId:4229a9648f0388b0c24f25f7aa77aacf8bd14e2cd4c7af11f8e5436d683ca9b2,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765246053617985118,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ebc811a7-8de7-4db7-ab8a-6466ab5be638,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb11585f8efb9685c531517ce6af0bd09674e3b5074d553d77e853bbacd7d794,PodSandboxId:b0d4c688e00ac0c63523f99e02d00e98818433a710144919e971d2dfc83f7472,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765245970662252981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gzjhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ccb903-d2f9-4e8c-b9fe-25384e736e56,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.po
rts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a0eb6016c3c8701b48b1c0daf6bd0ec2d9b246d60f5a2d55046e6c67d5e54cd,PodSandboxId:93dd276d617f88e3fcb4ba99fec9ce51d91246375b2553a08d899efea75fb28f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CO
NTAINER_RUNNING,CreatedAt:1765245970630464144,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7da85099-845e-43c0-abe3-694b2e59c644,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32e467a00eda98540f5fcd1ff5ac24480720d3fe71af69bd08b899810f97631,PodSandboxId:549a6ed604b063b938220bc6c839a85ec7d5c9fa23b1991faf03dd3a416180ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,Creat
edAt:1765245970581962315,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zwr8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17f41900-ed85-412a-b753-83ab0612a0d0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:600d2ecc96b44f7021b08ce2b5646ec800acfbb9b54d330fe059d5883db48b2e,PodSandboxId:06a38f77b61a47f375fe95f6b21b3046fbdf10582e34e2a4260019e558d8e573,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765245965739866434,Labe
ls:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff84ec456446fa53da87afee0f115e3f,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0561cdb1d940f72d9593f84899a0b04ab988d3ad75efe7b4ce43e03d904a29d,PodSandboxId:6e0a2877b725072401fab9146721d08dc56a4c4a81b3fbda297cc79dd8a963f1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8
bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765245965724590043,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfde71ed46fcf3e1413b4854831f8db9,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0150a2026dd7fbd8eef1be4f1149d3e711240afe63836d16882d2d4d2cd9d575,PodSandboxId:24063f0110f95e10240d247cba620e91ca39c134e08071df109b36ed7d5d6662,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581
b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765245965754099763,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5986582f384e8b76964852dba738451,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33019257a9a137de2f80ac7281f6d3349e5a11dccb52dfd133715a38715898e6,PodSandboxId:9bafe39d1fb4e9eca47c6e6b77ed0e9de1020797cc92de67eac433fe3f043fcc,Metadata:&ContainerMetada
ta{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765245965692497170,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d49e5f2eca05156c73661728e6bef94,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0791a3f540c43064eea797dadcd0d0b96a847e6c2b7d19bc79595a654597
4c2c,PodSandboxId:b57d9da4d6ff79239594fec3344c2d956d8f4be7ce4466409183d6f31fe170ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765245925898872103,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gzjhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ccb903-d2f9-4e8c-b9fe-25384e736e56,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness
-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a691542f0f67679391b917d59ffd9d5fe7c07b0b46dab363cd0d823a78d97cc,PodSandboxId:44f15f5505463fe9d6adebc9fdd36d278e59ed90cd5ac660218c41fafdef026d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765245925874479288,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zwr8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17f41900-ed85-412a-b753-83ab0612a0d0,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b94476afa73e7f5b21730d0829202d1d6524a0e811201d33665db75577741924,PodSandboxId:7471f0469bc616aead498ee3784d1390a01a97599c3d5a5318444e7deaa41c20,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765245925885367398,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7da85099-845e-43c0-abe3-694b2e59c644,},Annotations:map[string]string{io.kubernetes.container.hash: 6
c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3bd02c96f2d3339fa9d3bd443fc7063b2baf4b197fa5941beab45b59e9f1e71,PodSandboxId:c48aca2827221a2e8fa145591415bf88bceb762f5b087eed3f0844db1fdbb470,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765245921265200501,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfde71ed46fcf3e1413b4854831f8db9,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5e268ecdaf2fde98e798be33bab31cafba814db74814a43faf6667e7ce4f8c,PodSandboxId:c758dc0b99425c52831bdd825fbfbebf76a7560fb6bd8235f50f606ef4dd19d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765245921241204939,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-545294,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: ff84ec456446fa53da87afee0f115e3f,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04529d5f4a6494a40a53f85e66394b0cccf5a9dfa167ac714917d9c21812746c,PodSandboxId:72521b6a719a676feec078bcc76ce9308b46e162a4af8f254a06efb00c8f8e55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765245921228615658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kub
ernetes.pod.name: etcd-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5986582f384e8b76964852dba738451,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bfbc833c-3257-4afd-bb90-8693930b841c name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:12:51 functional-545294 crio[5890]: time="2025-12-09 02:12:51.255716441Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5071a685-a6aa-425a-a947-91dc5fa8a854 name=/runtime.v1.RuntimeService/Version
	Dec 09 02:12:51 functional-545294 crio[5890]: time="2025-12-09 02:12:51.255807841Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5071a685-a6aa-425a-a947-91dc5fa8a854 name=/runtime.v1.RuntimeService/Version
	Dec 09 02:12:51 functional-545294 crio[5890]: time="2025-12-09 02:12:51.257716830Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=36f48a59-693f-4f86-a991-c5c753f81e89 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:12:51 functional-545294 crio[5890]: time="2025-12-09 02:12:51.258814713Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765246371258786789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:240114,},InodesUsed:&UInt64Value{Value:104,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=36f48a59-693f-4f86-a991-c5c753f81e89 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:12:51 functional-545294 crio[5890]: time="2025-12-09 02:12:51.259976506Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cbb9a99e-b6ad-4ec8-83bf-de3e45b39dcd name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:12:51 functional-545294 crio[5890]: time="2025-12-09 02:12:51.260118601Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cbb9a99e-b6ad-4ec8-83bf-de3e45b39dcd name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:12:51 functional-545294 crio[5890]: time="2025-12-09 02:12:51.260516438Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47025c8e887c9b1c3ff3b4d7ad309846f91891a5cfc411e153c7bea0e23bdd24,PodSandboxId:831e808d811a5b3edbc977765dfe4d922f201a07cb785f406d402f2e2138e496,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765246101229560357,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-nbwpp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0f362234-70c0-47ff-afab-c6cb6c695ef6,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:291f1e39539ff7d651b830e7e042f1b9fcdb535d35c3bb69037513f9a244efe0,PodSandboxId:270dd75437f03d4bdcdca5fab35a5917a40402319586d912142502218051eb5a,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765246064745716091,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c8feca44-44ed-4eb4-8817-2e317d08cf50,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ffd46bc92d655ae18972da483ae0b30262242aca0b5bbbf3d3e198e2b48fbc,PodSandboxId:4229a9648f0388b0c24f25f7aa77aacf8bd14e2cd4c7af11f8e5436d683ca9b2,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765246053617985118,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ebc811a7-8de7-4db7-ab8a-6466ab5be638,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb11585f8efb9685c531517ce6af0bd09674e3b5074d553d77e853bbacd7d794,PodSandboxId:b0d4c688e00ac0c63523f99e02d00e98818433a710144919e971d2dfc83f7472,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765245970662252981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gzjhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ccb903-d2f9-4e8c-b9fe-25384e736e56,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.po
rts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a0eb6016c3c8701b48b1c0daf6bd0ec2d9b246d60f5a2d55046e6c67d5e54cd,PodSandboxId:93dd276d617f88e3fcb4ba99fec9ce51d91246375b2553a08d899efea75fb28f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CO
NTAINER_RUNNING,CreatedAt:1765245970630464144,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7da85099-845e-43c0-abe3-694b2e59c644,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32e467a00eda98540f5fcd1ff5ac24480720d3fe71af69bd08b899810f97631,PodSandboxId:549a6ed604b063b938220bc6c839a85ec7d5c9fa23b1991faf03dd3a416180ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,Creat
edAt:1765245970581962315,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zwr8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17f41900-ed85-412a-b753-83ab0612a0d0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:600d2ecc96b44f7021b08ce2b5646ec800acfbb9b54d330fe059d5883db48b2e,PodSandboxId:06a38f77b61a47f375fe95f6b21b3046fbdf10582e34e2a4260019e558d8e573,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765245965739866434,Labe
ls:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff84ec456446fa53da87afee0f115e3f,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0561cdb1d940f72d9593f84899a0b04ab988d3ad75efe7b4ce43e03d904a29d,PodSandboxId:6e0a2877b725072401fab9146721d08dc56a4c4a81b3fbda297cc79dd8a963f1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8
bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765245965724590043,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfde71ed46fcf3e1413b4854831f8db9,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0150a2026dd7fbd8eef1be4f1149d3e711240afe63836d16882d2d4d2cd9d575,PodSandboxId:24063f0110f95e10240d247cba620e91ca39c134e08071df109b36ed7d5d6662,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581
b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765245965754099763,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5986582f384e8b76964852dba738451,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33019257a9a137de2f80ac7281f6d3349e5a11dccb52dfd133715a38715898e6,PodSandboxId:9bafe39d1fb4e9eca47c6e6b77ed0e9de1020797cc92de67eac433fe3f043fcc,Metadata:&ContainerMetada
ta{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765245965692497170,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d49e5f2eca05156c73661728e6bef94,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0791a3f540c43064eea797dadcd0d0b96a847e6c2b7d19bc79595a654597
4c2c,PodSandboxId:b57d9da4d6ff79239594fec3344c2d956d8f4be7ce4466409183d6f31fe170ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765245925898872103,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gzjhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ccb903-d2f9-4e8c-b9fe-25384e736e56,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness
-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a691542f0f67679391b917d59ffd9d5fe7c07b0b46dab363cd0d823a78d97cc,PodSandboxId:44f15f5505463fe9d6adebc9fdd36d278e59ed90cd5ac660218c41fafdef026d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765245925874479288,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zwr8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17f41900-ed85-412a-b753-83ab0612a0d0,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b94476afa73e7f5b21730d0829202d1d6524a0e811201d33665db75577741924,PodSandboxId:7471f0469bc616aead498ee3784d1390a01a97599c3d5a5318444e7deaa41c20,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765245925885367398,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7da85099-845e-43c0-abe3-694b2e59c644,},Annotations:map[string]string{io.kubernetes.container.hash: 6
c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3bd02c96f2d3339fa9d3bd443fc7063b2baf4b197fa5941beab45b59e9f1e71,PodSandboxId:c48aca2827221a2e8fa145591415bf88bceb762f5b087eed3f0844db1fdbb470,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765245921265200501,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfde71ed46fcf3e1413b4854831f8db9,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5e268ecdaf2fde98e798be33bab31cafba814db74814a43faf6667e7ce4f8c,PodSandboxId:c758dc0b99425c52831bdd825fbfbebf76a7560fb6bd8235f50f606ef4dd19d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765245921241204939,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-545294,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: ff84ec456446fa53da87afee0f115e3f,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04529d5f4a6494a40a53f85e66394b0cccf5a9dfa167ac714917d9c21812746c,PodSandboxId:72521b6a719a676feec078bcc76ce9308b46e162a4af8f254a06efb00c8f8e55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765245921228615658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kub
ernetes.pod.name: etcd-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5986582f384e8b76964852dba738451,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cbb9a99e-b6ad-4ec8-83bf-de3e45b39dcd name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:12:51 functional-545294 crio[5890]: time="2025-12-09 02:12:51.294257312Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=55151446-3fcc-48e5-9a52-ae0398341c65 name=/runtime.v1.RuntimeService/Version
	Dec 09 02:12:51 functional-545294 crio[5890]: time="2025-12-09 02:12:51.294402220Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=55151446-3fcc-48e5-9a52-ae0398341c65 name=/runtime.v1.RuntimeService/Version
	Dec 09 02:12:51 functional-545294 crio[5890]: time="2025-12-09 02:12:51.295954442Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a19e6d07-3338-481f-bc9f-a43f52258323 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:12:51 functional-545294 crio[5890]: time="2025-12-09 02:12:51.297387709Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765246371297263045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:240114,},InodesUsed:&UInt64Value{Value:104,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a19e6d07-3338-481f-bc9f-a43f52258323 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:12:51 functional-545294 crio[5890]: time="2025-12-09 02:12:51.299297457Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f8692d1-b378-4af3-a1bb-27fda2415424 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:12:51 functional-545294 crio[5890]: time="2025-12-09 02:12:51.299450300Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8f8692d1-b378-4af3-a1bb-27fda2415424 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:12:51 functional-545294 crio[5890]: time="2025-12-09 02:12:51.299762481Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47025c8e887c9b1c3ff3b4d7ad309846f91891a5cfc411e153c7bea0e23bdd24,PodSandboxId:831e808d811a5b3edbc977765dfe4d922f201a07cb785f406d402f2e2138e496,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765246101229560357,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-nbwpp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0f362234-70c0-47ff-afab-c6cb6c695ef6,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:291f1e39539ff7d651b830e7e042f1b9fcdb535d35c3bb69037513f9a244efe0,PodSandboxId:270dd75437f03d4bdcdca5fab35a5917a40402319586d912142502218051eb5a,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765246064745716091,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c8feca44-44ed-4eb4-8817-2e317d08cf50,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ffd46bc92d655ae18972da483ae0b30262242aca0b5bbbf3d3e198e2b48fbc,PodSandboxId:4229a9648f0388b0c24f25f7aa77aacf8bd14e2cd4c7af11f8e5436d683ca9b2,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765246053617985118,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ebc811a7-8de7-4db7-ab8a-6466ab5be638,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb11585f8efb9685c531517ce6af0bd09674e3b5074d553d77e853bbacd7d794,PodSandboxId:b0d4c688e00ac0c63523f99e02d00e98818433a710144919e971d2dfc83f7472,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765245970662252981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gzjhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ccb903-d2f9-4e8c-b9fe-25384e736e56,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.po
rts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a0eb6016c3c8701b48b1c0daf6bd0ec2d9b246d60f5a2d55046e6c67d5e54cd,PodSandboxId:93dd276d617f88e3fcb4ba99fec9ce51d91246375b2553a08d899efea75fb28f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CO
NTAINER_RUNNING,CreatedAt:1765245970630464144,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7da85099-845e-43c0-abe3-694b2e59c644,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32e467a00eda98540f5fcd1ff5ac24480720d3fe71af69bd08b899810f97631,PodSandboxId:549a6ed604b063b938220bc6c839a85ec7d5c9fa23b1991faf03dd3a416180ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,Creat
edAt:1765245970581962315,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zwr8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17f41900-ed85-412a-b753-83ab0612a0d0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:600d2ecc96b44f7021b08ce2b5646ec800acfbb9b54d330fe059d5883db48b2e,PodSandboxId:06a38f77b61a47f375fe95f6b21b3046fbdf10582e34e2a4260019e558d8e573,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765245965739866434,Labe
ls:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff84ec456446fa53da87afee0f115e3f,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0561cdb1d940f72d9593f84899a0b04ab988d3ad75efe7b4ce43e03d904a29d,PodSandboxId:6e0a2877b725072401fab9146721d08dc56a4c4a81b3fbda297cc79dd8a963f1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8
bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765245965724590043,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfde71ed46fcf3e1413b4854831f8db9,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0150a2026dd7fbd8eef1be4f1149d3e711240afe63836d16882d2d4d2cd9d575,PodSandboxId:24063f0110f95e10240d247cba620e91ca39c134e08071df109b36ed7d5d6662,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581
b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765245965754099763,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5986582f384e8b76964852dba738451,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33019257a9a137de2f80ac7281f6d3349e5a11dccb52dfd133715a38715898e6,PodSandboxId:9bafe39d1fb4e9eca47c6e6b77ed0e9de1020797cc92de67eac433fe3f043fcc,Metadata:&ContainerMetada
ta{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765245965692497170,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d49e5f2eca05156c73661728e6bef94,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0791a3f540c43064eea797dadcd0d0b96a847e6c2b7d19bc79595a654597
4c2c,PodSandboxId:b57d9da4d6ff79239594fec3344c2d956d8f4be7ce4466409183d6f31fe170ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765245925898872103,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gzjhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ccb903-d2f9-4e8c-b9fe-25384e736e56,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness
-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a691542f0f67679391b917d59ffd9d5fe7c07b0b46dab363cd0d823a78d97cc,PodSandboxId:44f15f5505463fe9d6adebc9fdd36d278e59ed90cd5ac660218c41fafdef026d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765245925874479288,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zwr8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17f41900-ed85-412a-b753-83ab0612a0d0,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b94476afa73e7f5b21730d0829202d1d6524a0e811201d33665db75577741924,PodSandboxId:7471f0469bc616aead498ee3784d1390a01a97599c3d5a5318444e7deaa41c20,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765245925885367398,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7da85099-845e-43c0-abe3-694b2e59c644,},Annotations:map[string]string{io.kubernetes.container.hash: 6
c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3bd02c96f2d3339fa9d3bd443fc7063b2baf4b197fa5941beab45b59e9f1e71,PodSandboxId:c48aca2827221a2e8fa145591415bf88bceb762f5b087eed3f0844db1fdbb470,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765245921265200501,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfde71ed46fcf3e1413b4854831f8db9,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5e268ecdaf2fde98e798be33bab31cafba814db74814a43faf6667e7ce4f8c,PodSandboxId:c758dc0b99425c52831bdd825fbfbebf76a7560fb6bd8235f50f606ef4dd19d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765245921241204939,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-545294,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: ff84ec456446fa53da87afee0f115e3f,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04529d5f4a6494a40a53f85e66394b0cccf5a9dfa167ac714917d9c21812746c,PodSandboxId:72521b6a719a676feec078bcc76ce9308b46e162a4af8f254a06efb00c8f8e55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765245921228615658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kub
ernetes.pod.name: etcd-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5986582f384e8b76964852dba738451,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8f8692d1-b378-4af3-a1bb-27fda2415424 name=/runtime.v1.RuntimeService/ListContainers
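(Editor's annotation; the sketch below is not part of the captured log.) The ListContainers request/response pairs logged above are routine CRI polling against CRI-O. Assuming CRI-O's default socket path on the minikube VM, an equivalent query can be issued directly with the public CRI v1 client, which is essentially what crictl and the kubelet do; the import paths, socket path, and output formatting here are assumptions for illustration, not anything taken from this report.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket path: CRI-O's default, the same endpoint crictl uses inside the VM.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter returns the full list, matching the
	// "No filters were applied, returning full container list" entries above.
	resp, err := runtimeapi.NewRuntimeServiceClient(conn).
		ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Print a short summary similar to the "container status" table below.
		fmt.Printf("%s  %-25s attempt=%d state=%s\n",
			c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}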
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                         CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	47025c8e887c9       public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036   4 minutes ago       Running             mysql                     0                   831e808d811a5       mysql-6bcdcbc558-nbwpp                      default
	291f1e39539ff       d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9                                              5 minutes ago       Running             myfrontend                0                   270dd75437f03       sp-pod                                      default
	19ffd46bc92d6       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e           5 minutes ago       Exited              mount-munger              0                   4229a9648f038       busybox-mount                               default
	bb11585f8efb9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                              6 minutes ago       Running             coredns                   3                   b0d4c688e00ac       coredns-66bc5c9577-gzjhc                    kube-system
	9a0eb6016c3c8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              6 minutes ago       Running             storage-provisioner       4                   93dd276d617f8       storage-provisioner                         kube-system
	e32e467a00eda       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                              6 minutes ago       Running             kube-proxy                3                   549a6ed604b06       kube-proxy-zwr8l                            kube-system
	0150a2026dd7f       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                              6 minutes ago       Running             etcd                      3                   24063f0110f95       etcd-functional-545294                      kube-system
	600d2ecc96b44       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                              6 minutes ago       Running             kube-scheduler            3                   06a38f77b61a4       kube-scheduler-functional-545294            kube-system
	d0561cdb1d940       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                              6 minutes ago       Running             kube-controller-manager   3                   6e0a2877b7250       kube-controller-manager-functional-545294   kube-system
	33019257a9a13       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                              6 minutes ago       Running             kube-apiserver            0                   9bafe39d1fb4e       kube-apiserver-functional-545294            kube-system
	0791a3f540c43       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                              7 minutes ago       Exited              coredns                   2                   b57d9da4d6ff7       coredns-66bc5c9577-gzjhc                    kube-system
	b94476afa73e7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              7 minutes ago       Exited              storage-provisioner       3                   7471f0469bc61       storage-provisioner                         kube-system
	1a691542f0f67       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                              7 minutes ago       Exited              kube-proxy                2                   44f15f5505463       kube-proxy-zwr8l                            kube-system
	c3bd02c96f2d3       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                              7 minutes ago       Exited              kube-controller-manager   2                   c48aca2827221       kube-controller-manager-functional-545294   kube-system
	dd5e268ecdaf2       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                              7 minutes ago       Exited              kube-scheduler            2                   c758dc0b99425       kube-scheduler-functional-545294            kube-system
	04529d5f4a649       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                              7 minutes ago       Exited              etcd                      2                   72521b6a719a6       etcd-functional-545294                      kube-system
	
	
	==> coredns [0791a3f540c43064eea797dadcd0d0b96a847e6c2b7d19bc79595a6545974c2c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54594 - 51569 "HINFO IN 8372181516051953211.77839456938040332. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.046581635s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bb11585f8efb9685c531517ce6af0bd09674e3b5074d553d77e853bbacd7d794] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35309 - 28422 "HINFO IN 307392218677794923.4186193438894601201. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.078841625s
	
	
	==> describe nodes <==
	Name:               functional-545294
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-545294
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=functional-545294
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T02_04_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 02:04:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-545294
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 02:12:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 02:09:13 +0000   Tue, 09 Dec 2025 02:04:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 02:09:13 +0000   Tue, 09 Dec 2025 02:04:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 02:09:13 +0000   Tue, 09 Dec 2025 02:04:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 02:09:13 +0000   Tue, 09 Dec 2025 02:04:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.184
	  Hostname:    functional-545294
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 647139dfa1964d2db5480bfef1b99acc
	  System UUID:                647139df-a196-4d2d-b548-0bfef1b99acc
	  Boot ID:                    37f77225-d1dd-43f8-856c-62cf02b08d24
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-bjmjc                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  default                     hello-node-connect-7d85dfc575-ztccb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  default                     mysql-6bcdcbc558-nbwpp                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    5m10s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 coredns-66bc5c9577-gzjhc                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m33s
	  kube-system                 etcd-functional-545294                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m39s
	  kube-system                 kube-apiserver-functional-545294              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 kube-controller-manager-functional-545294     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 kube-proxy-zwr8l                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m33s
	  kube-system                 kube-scheduler-functional-545294              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m40s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m33s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-rmsbb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8dzft         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m32s                  kube-proxy       
	  Normal  Starting                 6m40s                  kube-proxy       
	  Normal  Starting                 7m25s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m39s                  kubelet          Node functional-545294 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m39s                  kubelet          Node functional-545294 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m39s                  kubelet          Node functional-545294 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m39s                  kubelet          Starting kubelet.
	  Normal  NodeReady                8m38s                  kubelet          Node functional-545294 status is now: NodeReady
	  Normal  RegisteredNode           8m34s                  node-controller  Node functional-545294 event: Registered Node functional-545294 in Controller
	  Normal  NodeHasNoDiskPressure    7m31s (x8 over 7m31s)  kubelet          Node functional-545294 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  7m31s (x8 over 7m31s)  kubelet          Node functional-545294 status is now: NodeHasSufficientMemory
	  Normal  Starting                 7m31s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     7m31s (x7 over 7m31s)  kubelet          Node functional-545294 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m24s                  node-controller  Node functional-545294 event: Registered Node functional-545294 in Controller
	  Normal  Starting                 6m47s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m46s (x8 over 6m47s)  kubelet          Node functional-545294 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m46s (x8 over 6m47s)  kubelet          Node functional-545294 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m46s (x7 over 6m47s)  kubelet          Node functional-545294 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m39s                  node-controller  Node functional-545294 event: Registered Node functional-545294 in Controller
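(Editor's annotation; the sketch below is not part of the captured log.) The percentages in the Allocated resources table above are simply the summed requests/limits divided by the node's Allocatable capacity, truncated to a whole percent; a quick check against the figures reported for this node:

package main

import "fmt"

func main() {
	const (
		allocatableCPUMilli = 2000    // Allocatable: cpu 2, expressed in millicores
		allocatableMemKi    = 4001788 // Allocatable: memory 4001788Ki
	)
	cpuRequestsMilli := 1350    // summed CPU requests from the pod table (600m mysql + 250m apiserver + ...)
	memRequestsKi := 682 * 1024 // 682Mi of summed memory requests

	fmt.Printf("cpu    %d%%\n", cpuRequestsMilli*100/allocatableCPUMilli) // 67%, as reported
	fmt.Printf("memory %d%%\n", memRequestsKi*100/allocatableMemKi)       // 17%, as reported
}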
	
	
	==> dmesg <==
	[  +1.190372] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000020] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.083740] kauditd_printk_skb: 1 callbacks suppressed
	[Dec 9 02:04] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.139051] kauditd_printk_skb: 171 callbacks suppressed
	[  +1.032005] kauditd_printk_skb: 18 callbacks suppressed
	[ +29.006070] kauditd_printk_skb: 220 callbacks suppressed
	[Dec 9 02:05] kauditd_printk_skb: 11 callbacks suppressed
	[  +9.640636] kauditd_printk_skb: 291 callbacks suppressed
	[  +0.430584] kauditd_printk_skb: 222 callbacks suppressed
	[  +4.680345] kauditd_printk_skb: 58 callbacks suppressed
	[  +4.780090] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.109932] kauditd_printk_skb: 12 callbacks suppressed
	[Dec 9 02:06] kauditd_printk_skb: 78 callbacks suppressed
	[  +5.609542] kauditd_printk_skb: 167 callbacks suppressed
	[  +5.081016] kauditd_printk_skb: 133 callbacks suppressed
	[  +2.048397] kauditd_printk_skb: 91 callbacks suppressed
	[  +0.000071] kauditd_printk_skb: 74 callbacks suppressed
	[Dec 9 02:07] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.774352] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.241749] kauditd_printk_skb: 109 callbacks suppressed
	[Dec 9 02:08] kauditd_printk_skb: 74 callbacks suppressed
	[ +14.356751] crun[10477]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +2.479232] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [0150a2026dd7fbd8eef1be4f1149d3e711240afe63836d16882d2d4d2cd9d575] <==
	{"level":"warn","ts":"2025-12-09T02:08:20.002105Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"383.512133ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T02:08:20.002122Z","caller":"traceutil/trace.go:172","msg":"trace[710112562] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:911; }","duration":"383.543922ms","start":"2025-12-09T02:08:19.618573Z","end":"2025-12-09T02:08:20.002117Z","steps":["trace[710112562] 'agreement among raft nodes before linearized reading'  (duration: 383.487886ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T02:08:20.002141Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-09T02:08:19.618557Z","time spent":"383.578633ms","remote":"127.0.0.1:54598","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-12-09T02:08:20.003095Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"177.631085ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T02:08:20.003168Z","caller":"traceutil/trace.go:172","msg":"trace[1020812714] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:912; }","duration":"177.66913ms","start":"2025-12-09T02:08:19.825450Z","end":"2025-12-09T02:08:20.003119Z","steps":["trace[1020812714] 'agreement among raft nodes before linearized reading'  (duration: 177.327113ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T02:08:20.002889Z","caller":"traceutil/trace.go:172","msg":"trace[1367457961] transaction","detail":"{read_only:false; response_revision:912; number_of_response:1; }","duration":"391.382608ms","start":"2025-12-09T02:08:19.611496Z","end":"2025-12-09T02:08:20.002878Z","steps":["trace[1367457961] 'process raft request'  (duration: 391.019222ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T02:08:20.005980Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-09T02:08:19.611479Z","time spent":"394.340715ms","remote":"127.0.0.1:54568","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:911 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-12-09T02:08:20.336700Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.249146ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T02:08:20.336765Z","caller":"traceutil/trace.go:172","msg":"trace[144813138] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:912; }","duration":"109.377971ms","start":"2025-12-09T02:08:20.227376Z","end":"2025-12-09T02:08:20.336754Z","steps":["trace[144813138] 'range keys from in-memory index tree'  (duration: 109.196157ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T02:08:20.810490Z","caller":"traceutil/trace.go:172","msg":"trace[1619745664] linearizableReadLoop","detail":"{readStateIndex:1016; appliedIndex:1016; }","duration":"192.187362ms","start":"2025-12-09T02:08:20.618287Z","end":"2025-12-09T02:08:20.810474Z","steps":["trace[1619745664] 'read index received'  (duration: 192.182489ms)","trace[1619745664] 'applied index is now lower than readState.Index'  (duration: 4.268µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-09T02:08:20.810573Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"192.2733ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T02:08:20.810588Z","caller":"traceutil/trace.go:172","msg":"trace[1911247656] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:912; }","duration":"192.300929ms","start":"2025-12-09T02:08:20.618283Z","end":"2025-12-09T02:08:20.810584Z","steps":["trace[1911247656] 'agreement among raft nodes before linearized reading'  (duration: 192.245332ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T02:08:22.742838Z","caller":"traceutil/trace.go:172","msg":"trace[837358911] transaction","detail":"{read_only:false; response_revision:927; number_of_response:1; }","duration":"142.237669ms","start":"2025-12-09T02:08:22.600586Z","end":"2025-12-09T02:08:22.742824Z","steps":["trace[837358911] 'process raft request'  (duration: 142.154199ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T02:08:22.743170Z","caller":"traceutil/trace.go:172","msg":"trace[756897999] linearizableReadLoop","detail":"{readStateIndex:1032; appliedIndex:1033; }","duration":"123.003501ms","start":"2025-12-09T02:08:22.620157Z","end":"2025-12-09T02:08:22.743160Z","steps":["trace[756897999] 'read index received'  (duration: 123.000464ms)","trace[756897999] 'applied index is now lower than readState.Index'  (duration: 2.391µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-09T02:08:22.743236Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.067875ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T02:08:22.743251Z","caller":"traceutil/trace.go:172","msg":"trace[1578570093] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:927; }","duration":"123.093605ms","start":"2025-12-09T02:08:22.620153Z","end":"2025-12-09T02:08:22.743247Z","steps":["trace[1578570093] 'agreement among raft nodes before linearized reading'  (duration: 123.040471ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T02:08:26.193519Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"444.5099ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9433522797471565165 > lease_revoke:<id:02ea9b00dbd784e4>","response":"size:29"}
	{"level":"info","ts":"2025-12-09T02:08:26.193632Z","caller":"traceutil/trace.go:172","msg":"trace[1284558279] linearizableReadLoop","detail":"{readStateIndex:1035; appliedIndex:1034; }","duration":"370.030474ms","start":"2025-12-09T02:08:25.823592Z","end":"2025-12-09T02:08:26.193622Z","steps":["trace[1284558279] 'read index received'  (duration: 25.536µs)","trace[1284558279] 'applied index is now lower than readState.Index'  (duration: 370.004319ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-09T02:08:26.193849Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"370.250113ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T02:08:26.193891Z","caller":"traceutil/trace.go:172","msg":"trace[1953016275] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:928; }","duration":"370.296736ms","start":"2025-12-09T02:08:25.823588Z","end":"2025-12-09T02:08:26.193885Z","steps":["trace[1953016275] 'agreement among raft nodes before linearized reading'  (duration: 370.22523ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T02:08:26.193938Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-09T02:08:25.823573Z","time spent":"370.333531ms","remote":"127.0.0.1:54598","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-12-09T02:08:26.194174Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"247.91798ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T02:08:26.194212Z","caller":"traceutil/trace.go:172","msg":"trace[1807749834] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:928; }","duration":"247.958108ms","start":"2025-12-09T02:08:25.946248Z","end":"2025-12-09T02:08:26.194207Z","steps":["trace[1807749834] 'agreement among raft nodes before linearized reading'  (duration: 247.903203ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T02:08:26.194400Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"140.073937ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2025-12-09T02:08:26.194439Z","caller":"traceutil/trace.go:172","msg":"trace[660959889] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:928; }","duration":"140.114024ms","start":"2025-12-09T02:08:26.054319Z","end":"2025-12-09T02:08:26.194433Z","steps":["trace[660959889] 'agreement among raft nodes before linearized reading'  (duration: 140.020194ms)"],"step_count":1}
	
	
	==> etcd [04529d5f4a6494a40a53f85e66394b0cccf5a9dfa167ac714917d9c21812746c] <==
	{"level":"warn","ts":"2025-12-09T02:05:23.338509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:05:23.353218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:05:23.371748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:05:23.381964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:05:23.399347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:05:23.412155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:05:23.509067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40368","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-09T02:05:48.441858Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-09T02:05:48.443863Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-545294","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.184:2380"],"advertise-client-urls":["https://192.168.39.184:2379"]}
	{"level":"error","ts":"2025-12-09T02:05:48.444832Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-09T02:05:48.538599Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-09T02:05:48.538657Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-09T02:05:48.538675Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"989272a6374482ea","current-leader-member-id":"989272a6374482ea"}
	{"level":"info","ts":"2025-12-09T02:05:48.538752Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-09T02:05:48.538762Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-09T02:05:48.539259Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.184:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-09T02:05:48.539452Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.184:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-09T02:05:48.539479Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.184:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-09T02:05:48.539583Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-09T02:05:48.539681Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-09T02:05:48.539707Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-09T02:05:48.542133Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.184:2380"}
	{"level":"error","ts":"2025-12-09T02:05:48.542189Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.184:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-09T02:05:48.542210Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.184:2380"}
	{"level":"info","ts":"2025-12-09T02:05:48.542215Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-545294","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.184:2380"],"advertise-client-urls":["https://192.168.39.184:2379"]}
	
	
	==> kernel <==
	 02:12:51 up 9 min,  0 users,  load average: 0.14, 0.37, 0.27
	Linux functional-545294 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [33019257a9a137de2f80ac7281f6d3349e5a11dccb52dfd133715a38715898e6] <==
	I1209 02:06:09.203974       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 02:06:09.205599       1 cache.go:39] Caches are synced for autoregister controller
	I1209 02:06:09.797123       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1209 02:06:09.986149       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1209 02:06:11.123786       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1209 02:06:11.196376       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1209 02:06:11.249317       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 02:06:11.266771       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 02:06:12.522472       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 02:06:12.772908       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1209 02:06:12.872500       1 controller.go:667] quota admission added evaluator for: endpoints
	I1209 02:06:24.900942       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.135.133"}
	I1209 02:06:30.217791       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.109.42.231"}
	I1209 02:06:30.606675       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.198.212"}
	I1209 02:07:41.772495       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.110.148.249"}
	E1209 02:07:42.840402       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8441->192.168.39.1:42032: use of closed network connection
	E1209 02:07:50.328340       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8441->192.168.39.1:45564: use of closed network connection
	I1209 02:07:51.350074       1 controller.go:667] quota admission added evaluator for: namespaces
	I1209 02:07:51.768263       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.101.45"}
	I1209 02:07:51.802791       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.67.131"}
	E1209 02:08:26.987317       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8441->192.168.39.1:51836: use of closed network connection
	E1209 02:08:28.054828       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8441->192.168.39.1:51860: use of closed network connection
	E1209 02:08:29.468371       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8441->192.168.39.1:51870: use of closed network connection
	E1209 02:08:31.316636       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8441->192.168.39.1:51892: use of closed network connection
	E1209 02:08:35.158814       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8441->192.168.39.1:54452: use of closed network connection
	
	
	==> kube-controller-manager [c3bd02c96f2d3339fa9d3bd443fc7063b2baf4b197fa5941beab45b59e9f1e71] <==
	I1209 02:05:27.470621       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1209 02:05:27.471242       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-545294"
	I1209 02:05:27.471430       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1209 02:05:27.474344       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1209 02:05:27.477438       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1209 02:05:27.477546       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1209 02:05:27.481982       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1209 02:05:27.485874       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1209 02:05:27.493943       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1209 02:05:27.496206       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1209 02:05:27.501290       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1209 02:05:27.502862       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1209 02:05:27.503047       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1209 02:05:27.503125       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1209 02:05:27.504186       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1209 02:05:27.509366       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1209 02:05:27.514400       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1209 02:05:27.516437       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 02:05:27.518312       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1209 02:05:27.522237       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1209 02:05:27.568197       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1209 02:05:27.694771       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1209 02:05:27.702894       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1209 02:05:27.703086       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1209 02:05:27.703118       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [d0561cdb1d940f72d9593f84899a0b04ab988d3ad75efe7b4ce43e03d904a29d] <==
	I1209 02:06:12.519650       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1209 02:06:12.519669       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1209 02:06:12.519945       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1209 02:06:12.522083       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1209 02:06:12.525421       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1209 02:06:12.526643       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1209 02:06:12.533477       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1209 02:06:12.533442       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1209 02:06:12.536732       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 02:06:12.539185       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1209 02:06:12.539288       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1209 02:06:12.539422       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1209 02:06:12.539516       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-545294"
	I1209 02:06:12.539550       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1209 02:06:12.542145       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1209 02:06:12.545166       1 shared_informer.go:356] "Caches are synced" controller="job"
	E1209 02:07:51.480180       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:07:51.487321       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:07:51.509273       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:07:51.515375       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:07:51.542183       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:07:51.545675       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:07:51.572173       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:07:51.572206       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:07:51.584118       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [1a691542f0f67679391b917d59ffd9d5fe7c07b0b46dab363cd0d823a78d97cc] <==
	I1209 02:05:26.320349       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1209 02:05:26.421246       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1209 02:05:26.421613       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.184"]
	E1209 02:05:26.422529       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 02:05:26.525897       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1209 02:05:26.525977       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 02:05:26.526602       1 server_linux.go:132] "Using iptables Proxier"
	I1209 02:05:26.562347       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 02:05:26.563653       1 server.go:527] "Version info" version="v1.34.2"
	I1209 02:05:26.563962       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:05:26.579817       1 config.go:200] "Starting service config controller"
	I1209 02:05:26.580206       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 02:05:26.580444       1 config.go:106] "Starting endpoint slice config controller"
	I1209 02:05:26.580517       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 02:05:26.583355       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 02:05:26.583455       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 02:05:26.590933       1 config.go:309] "Starting node config controller"
	I1209 02:05:26.591060       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 02:05:26.591069       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 02:05:26.680620       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 02:05:26.680658       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1209 02:05:26.683597       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [e32e467a00eda98540f5fcd1ff5ac24480720d3fe71af69bd08b899810f97631] <==
	I1209 02:06:11.104122       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1209 02:06:11.205083       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1209 02:06:11.205110       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.184"]
	E1209 02:06:11.205200       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 02:06:11.337784       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1209 02:06:11.337915       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 02:06:11.337954       1 server_linux.go:132] "Using iptables Proxier"
	I1209 02:06:11.361533       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 02:06:11.361939       1 server.go:527] "Version info" version="v1.34.2"
	I1209 02:06:11.363211       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:06:11.379755       1 config.go:200] "Starting service config controller"
	I1209 02:06:11.379769       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 02:06:11.379788       1 config.go:106] "Starting endpoint slice config controller"
	I1209 02:06:11.379797       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 02:06:11.379838       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 02:06:11.379843       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 02:06:11.380923       1 config.go:309] "Starting node config controller"
	I1209 02:06:11.385662       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 02:06:11.385780       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 02:06:11.481250       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 02:06:11.481382       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 02:06:11.484130       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [600d2ecc96b44f7021b08ce2b5646ec800acfbb9b54d330fe059d5883db48b2e] <==
	I1209 02:06:06.775309       1 serving.go:386] Generated self-signed cert in-memory
	W1209 02:06:09.046112       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 02:06:09.046155       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 02:06:09.046165       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 02:06:09.046171       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 02:06:09.110256       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1209 02:06:09.110354       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:06:09.116483       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:06:09.116532       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:06:09.117060       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1209 02:06:09.117800       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 02:06:09.217073       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [dd5e268ecdaf2fde98e798be33bab31cafba814db74814a43faf6667e7ce4f8c] <==
	I1209 02:05:23.247182       1 serving.go:386] Generated self-signed cert in-memory
	W1209 02:05:24.076293       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 02:05:24.076332       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 02:05:24.076346       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 02:05:24.076352       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 02:05:24.177875       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1209 02:05:24.181087       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:05:24.186865       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:05:24.186913       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:05:24.188170       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1209 02:05:24.189648       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 02:05:24.287594       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:05:48.472329       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:05:48.472480       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1209 02:05:48.472497       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1209 02:05:48.472523       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1209 02:05:48.472572       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1209 02:05:48.472589       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 09 02:11:45 functional-545294 kubelet[6253]: E1209 02:11:45.017137    6253 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765246305016627687 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:240114} inodes_used:{value:104}}"
	Dec 09 02:11:48 functional-545294 kubelet[6253]: E1209 02:11:48.798847    6253 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8dzft" podUID="dbd91bdc-46aa-41fc-a596-f6f221db50ff"
	Dec 09 02:11:55 functional-545294 kubelet[6253]: E1209 02:11:55.019911    6253 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765246315019337217 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:240114} inodes_used:{value:104}}"
	Dec 09 02:11:55 functional-545294 kubelet[6253]: E1209 02:11:55.020035    6253 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765246315019337217 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:240114} inodes_used:{value:104}}"
	Dec 09 02:12:04 functional-545294 kubelet[6253]: E1209 02:12:04.891961    6253 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod17f41900-ed85-412a-b753-83ab0612a0d0/crio-44f15f5505463fe9d6adebc9fdd36d278e59ed90cd5ac660218c41fafdef026d: Error finding container 44f15f5505463fe9d6adebc9fdd36d278e59ed90cd5ac660218c41fafdef026d: Status 404 returned error can't find the container with id 44f15f5505463fe9d6adebc9fdd36d278e59ed90cd5ac660218c41fafdef026d
	Dec 09 02:12:04 functional-545294 kubelet[6253]: E1209 02:12:04.892367    6253 manager.go:1116] Failed to create existing container: /kubepods/burstable/poda5986582f384e8b76964852dba738451/crio-72521b6a719a676feec078bcc76ce9308b46e162a4af8f254a06efb00c8f8e55: Error finding container 72521b6a719a676feec078bcc76ce9308b46e162a4af8f254a06efb00c8f8e55: Status 404 returned error can't find the container with id 72521b6a719a676feec078bcc76ce9308b46e162a4af8f254a06efb00c8f8e55
	Dec 09 02:12:04 functional-545294 kubelet[6253]: E1209 02:12:04.892682    6253 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod7da85099-845e-43c0-abe3-694b2e59c644/crio-7471f0469bc616aead498ee3784d1390a01a97599c3d5a5318444e7deaa41c20: Error finding container 7471f0469bc616aead498ee3784d1390a01a97599c3d5a5318444e7deaa41c20: Status 404 returned error can't find the container with id 7471f0469bc616aead498ee3784d1390a01a97599c3d5a5318444e7deaa41c20
	Dec 09 02:12:04 functional-545294 kubelet[6253]: E1209 02:12:04.893076    6253 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod66ccb903-d2f9-4e8c-b9fe-25384e736e56/crio-b57d9da4d6ff79239594fec3344c2d956d8f4be7ce4466409183d6f31fe170ab: Error finding container b57d9da4d6ff79239594fec3344c2d956d8f4be7ce4466409183d6f31fe170ab: Status 404 returned error can't find the container with id b57d9da4d6ff79239594fec3344c2d956d8f4be7ce4466409183d6f31fe170ab
	Dec 09 02:12:04 functional-545294 kubelet[6253]: E1209 02:12:04.893572    6253 manager.go:1116] Failed to create existing container: /kubepods/burstable/podff84ec456446fa53da87afee0f115e3f/crio-c758dc0b99425c52831bdd825fbfbebf76a7560fb6bd8235f50f606ef4dd19d6: Error finding container c758dc0b99425c52831bdd825fbfbebf76a7560fb6bd8235f50f606ef4dd19d6: Status 404 returned error can't find the container with id c758dc0b99425c52831bdd825fbfbebf76a7560fb6bd8235f50f606ef4dd19d6
	Dec 09 02:12:04 functional-545294 kubelet[6253]: E1209 02:12:04.893926    6253 manager.go:1116] Failed to create existing container: /kubepods/burstable/poddfde71ed46fcf3e1413b4854831f8db9/crio-c48aca2827221a2e8fa145591415bf88bceb762f5b087eed3f0844db1fdbb470: Error finding container c48aca2827221a2e8fa145591415bf88bceb762f5b087eed3f0844db1fdbb470: Status 404 returned error can't find the container with id c48aca2827221a2e8fa145591415bf88bceb762f5b087eed3f0844db1fdbb470
	Dec 09 02:12:05 functional-545294 kubelet[6253]: E1209 02:12:05.023050    6253 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765246325022371372 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:240114} inodes_used:{value:104}}"
	Dec 09 02:12:05 functional-545294 kubelet[6253]: E1209 02:12:05.023091    6253 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765246325022371372 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:240114} inodes_used:{value:104}}"
	Dec 09 02:12:15 functional-545294 kubelet[6253]: E1209 02:12:15.025739    6253 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765246335025341044 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:240114} inodes_used:{value:104}}"
	Dec 09 02:12:15 functional-545294 kubelet[6253]: E1209 02:12:15.025764    6253 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765246335025341044 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:240114} inodes_used:{value:104}}"
	Dec 09 02:12:25 functional-545294 kubelet[6253]: E1209 02:12:25.027734    6253 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765246345027275806 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:240114} inodes_used:{value:104}}"
	Dec 09 02:12:25 functional-545294 kubelet[6253]: E1209 02:12:25.027963    6253 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765246345027275806 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:240114} inodes_used:{value:104}}"
	Dec 09 02:12:35 functional-545294 kubelet[6253]: E1209 02:12:35.030935    6253 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765246355030533806 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:240114} inodes_used:{value:104}}"
	Dec 09 02:12:35 functional-545294 kubelet[6253]: E1209 02:12:35.030980    6253 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765246355030533806 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:240114} inodes_used:{value:104}}"
	Dec 09 02:12:36 functional-545294 kubelet[6253]: E1209 02:12:36.173405    6253 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 09 02:12:36 functional-545294 kubelet[6253]: E1209 02:12:36.173458    6253 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 09 02:12:36 functional-545294 kubelet[6253]: E1209 02:12:36.173619    6253 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-rmsbb_kubernetes-dashboard(01ef0bff-1c06-4278-a785-c9f4e2ddb30c): ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 09 02:12:36 functional-545294 kubelet[6253]: E1209 02:12:36.173652    6253 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-rmsbb" podUID="01ef0bff-1c06-4278-a785-c9f4e2ddb30c"
	Dec 09 02:12:45 functional-545294 kubelet[6253]: E1209 02:12:45.034225    6253 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765246365033391203 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:240114} inodes_used:{value:104}}"
	Dec 09 02:12:45 functional-545294 kubelet[6253]: E1209 02:12:45.034260    6253 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765246365033391203 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:240114} inodes_used:{value:104}}"
	Dec 09 02:12:48 functional-545294 kubelet[6253]: E1209 02:12:48.796432    6253 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-rmsbb" podUID="01ef0bff-1c06-4278-a785-c9f4e2ddb30c"
	
	
	==> storage-provisioner [9a0eb6016c3c8701b48b1c0daf6bd0ec2d9b246d60f5a2d55046e6c67d5e54cd] <==
	W1209 02:12:27.729473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:12:29.732962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:12:29.743114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:12:31.747363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:12:31.753821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:12:33.758181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:12:33.768615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:12:35.772207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:12:35.783219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:12:37.786861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:12:37.793058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:12:39.797119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:12:39.803439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:12:41.807660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:12:41.812670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:12:43.815664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:12:43.821310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:12:45.826140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:12:45.839584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:12:47.844111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:12:47.850945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:12:49.856478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:12:49.867927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:12:51.873550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:12:51.889490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b94476afa73e7f5b21730d0829202d1d6524a0e811201d33665db75577741924] <==
	I1209 02:05:26.093774       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 02:05:26.149266       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 02:05:26.149542       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1209 02:05:26.185123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:05:29.642726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:05:33.903543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:05:37.503047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:05:40.556654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:05:43.581531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:05:43.595885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1209 02:05:43.596320       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 02:05:43.596559       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-545294_5ba57f46-08dc-4cc2-9a36-72607c17a622!
	I1209 02:05:43.597686       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e6c0a043-6a90-4a97-a10c-da4de6c83992", APIVersion:"v1", ResourceVersion:"508", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-545294_5ba57f46-08dc-4cc2-9a36-72607c17a622 became leader
	W1209 02:05:43.609289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:05:43.626360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1209 02:05:43.697697       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-545294_5ba57f46-08dc-4cc2-9a36-72607c17a622!
	W1209 02:05:45.630599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:05:45.644234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:05:47.648598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:05:47.654873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-545294 -n functional-545294
helpers_test.go:269: (dbg) Run:  kubectl --context functional-545294 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-bjmjc hello-node-connect-7d85dfc575-ztccb dashboard-metrics-scraper-77bf4d6c4c-rmsbb kubernetes-dashboard-855c9754f9-8dzft
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-545294 describe pod busybox-mount hello-node-75c85bcc94-bjmjc hello-node-connect-7d85dfc575-ztccb dashboard-metrics-scraper-77bf4d6c4c-rmsbb kubernetes-dashboard-855c9754f9-8dzft
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-545294 describe pod busybox-mount hello-node-75c85bcc94-bjmjc hello-node-connect-7d85dfc575-ztccb dashboard-metrics-scraper-77bf4d6c4c-rmsbb kubernetes-dashboard-855c9754f9-8dzft: exit status 1 (104.260094ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-545294/192.168.39.184
	Start Time:       Tue, 09 Dec 2025 02:06:37 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://19ffd46bc92d655ae18972da483ae0b30262242aca0b5bbbf3d3e198e2b48fbc
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 09 Dec 2025 02:07:33 +0000
	      Finished:     Tue, 09 Dec 2025 02:07:33 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zvlb9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zvlb9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  6m15s  default-scheduler  Successfully assigned default/busybox-mount to functional-545294
	  Normal  Pulling    6m15s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m19s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.243s (55.843s including waiting). Image size: 4631262 bytes.
	  Normal  Created    5m19s  kubelet            Created container: mount-munger
	  Normal  Started    5m19s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-bjmjc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-545294/192.168.39.184
	Start Time:       Tue, 09 Dec 2025 02:06:30 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8b2td (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8b2td:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m22s                 default-scheduler  Successfully assigned default/hello-node-75c85bcc94-bjmjc to functional-545294
	  Warning  Failed     4m1s                  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     107s (x2 over 5m21s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     107s (x3 over 5m21s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    72s (x5 over 5m21s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     72s (x5 over 5m21s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    58s (x4 over 6m21s)   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-ztccb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-545294/192.168.39.184
	Start Time:       Tue, 09 Dec 2025 02:06:30 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fwkqd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-fwkqd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m22s                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ztccb to functional-545294
	  Warning  Failed     5m51s                  kubelet            Failed to pull image "kicbase/echo-server": copying system image from manifest list: determining manifest MIME type for docker://kicbase/echo-server:latest: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m46s                  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m17s (x3 over 5m51s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m17s                  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    99s (x5 over 5m51s)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     99s (x5 over 5m51s)    kubelet            Error: ImagePullBackOff
	  Normal   Pulling    84s (x4 over 6m22s)    kubelet            Pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-rmsbb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-8dzft" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-545294 describe pod busybox-mount hello-node-75c85bcc94-bjmjc hello-node-connect-7d85dfc575-ztccb dashboard-metrics-scraper-77bf4d6c4c-rmsbb kubernetes-dashboard-855c9754f9-8dzft: exit status 1
E1209 02:13:08.553613  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/DashboardCmd (302.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-545294 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-545294 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-ztccb" [81ac73b0-bd30-4571-9cf7-60cac343d17f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-545294 -n functional-545294
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-09 02:16:30.483182323 +0000 UTC m=+1254.076802414
functional_test.go:1645: (dbg) Run:  kubectl --context functional-545294 describe po hello-node-connect-7d85dfc575-ztccb -n default
functional_test.go:1645: (dbg) kubectl --context functional-545294 describe po hello-node-connect-7d85dfc575-ztccb -n default:
Name:             hello-node-connect-7d85dfc575-ztccb
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-545294/192.168.39.184
Start Time:       Tue, 09 Dec 2025 02:06:30 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fwkqd (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-fwkqd:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ztccb to functional-545294
Warning  Failed     9m29s                 kubelet            Failed to pull image "kicbase/echo-server": copying system image from manifest list: determining manifest MIME type for docker://kicbase/echo-server:latest: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     5m55s                 kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    116s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     78s (x5 over 9m29s)   kubelet            Error: ErrImagePull
Warning  Failed     78s (x3 over 8m24s)   kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     13s (x16 over 9m29s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    2s (x17 over 9m29s)   kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-545294 logs hello-node-connect-7d85dfc575-ztccb -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-545294 logs hello-node-connect-7d85dfc575-ztccb -n default: exit status 1 (91.685108ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-ztccb" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-545294 logs hello-node-connect-7d85dfc575-ztccb -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-545294 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-ztccb
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-545294/192.168.39.184
Start Time:       Tue, 09 Dec 2025 02:06:30 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fwkqd (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-fwkqd:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ztccb to functional-545294
Warning  Failed     9m29s                 kubelet            Failed to pull image "kicbase/echo-server": copying system image from manifest list: determining manifest MIME type for docker://kicbase/echo-server:latest: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     5m55s                 kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    116s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     78s (x5 over 9m29s)   kubelet            Error: ErrImagePull
Warning  Failed     78s (x3 over 8m24s)   kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     13s (x16 over 9m29s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    2s (x17 over 9m29s)   kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-545294 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-545294 logs -l app=hello-node-connect: exit status 1 (77.647798ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-ztccb" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-545294 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-545294 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.109.42.231
IPs:                      10.109.42.231
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30749/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
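The empty Endpoints field is consistent with the pod events above: the only pod matching the app=hello-node-connect selector never becomes Ready, so the NodePort has no backend to route to and the connect test times out. A quick way to confirm that correlation, assuming the same context and service name:
	kubectl --context functional-545294 get pods -l app=hello-node-connect -o wide
	kubectl --context functional-545294 get endpoints hello-node-connect
	kubectl --context functional-545294 get endpointslices -l kubernetes.io/service-name=hello-node-connect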
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-545294 -n functional-545294
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-545294 logs -n 25: (1.531155332s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                  ARGS                                                   │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-545294 ssh findmnt -T /mount1                                                                │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:07 UTC │ 09 Dec 25 02:07 UTC │
	│ ssh            │ functional-545294 ssh findmnt -T /mount2                                                                │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:07 UTC │ 09 Dec 25 02:07 UTC │
	│ ssh            │ functional-545294 ssh findmnt -T /mount3                                                                │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:07 UTC │ 09 Dec 25 02:07 UTC │
	│ mount          │ -p functional-545294 --kill=true                                                                        │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:07 UTC │                     │
	│ ssh            │ functional-545294 ssh sudo cat /etc/ssl/certs/258854.pem                                                │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:07 UTC │ 09 Dec 25 02:07 UTC │
	│ ssh            │ functional-545294 ssh sudo cat /usr/share/ca-certificates/258854.pem                                    │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:07 UTC │ 09 Dec 25 02:07 UTC │
	│ ssh            │ functional-545294 ssh sudo cat /etc/ssl/certs/51391683.0                                                │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:07 UTC │ 09 Dec 25 02:07 UTC │
	│ ssh            │ functional-545294 ssh sudo cat /etc/ssl/certs/2588542.pem                                               │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:07 UTC │ 09 Dec 25 02:07 UTC │
	│ ssh            │ functional-545294 ssh sudo cat /usr/share/ca-certificates/2588542.pem                                   │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:07 UTC │ 09 Dec 25 02:07 UTC │
	│ ssh            │ functional-545294 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:07 UTC │ 09 Dec 25 02:07 UTC │
	│ start          │ -p functional-545294 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:07 UTC │                     │
	│ start          │ -p functional-545294 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio           │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:07 UTC │                     │
	│ ssh            │ functional-545294 ssh sudo cat /etc/test/nested/copy/258854/hosts                                       │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:07 UTC │ 09 Dec 25 02:07 UTC │
	│ dashboard      │ --url --port 36195 -p functional-545294 --alsologtostderr -v=1                                          │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:07 UTC │                     │
	│ image          │ functional-545294 image ls --format short --alsologtostderr                                             │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:08 UTC │ 09 Dec 25 02:08 UTC │
	│ image          │ functional-545294 image ls --format yaml --alsologtostderr                                              │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:08 UTC │ 09 Dec 25 02:08 UTC │
	│ ssh            │ functional-545294 ssh pgrep buildkitd                                                                   │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:08 UTC │                     │
	│ image          │ functional-545294 image build -t localhost/my-image:functional-545294 testdata/build --alsologtostderr  │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:08 UTC │ 09 Dec 25 02:08 UTC │
	│ image          │ functional-545294 image ls                                                                              │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:08 UTC │ 09 Dec 25 02:08 UTC │
	│ image          │ functional-545294 image ls --format json --alsologtostderr                                              │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:08 UTC │ 09 Dec 25 02:08 UTC │
	│ image          │ functional-545294 image ls --format table --alsologtostderr                                             │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:08 UTC │ 09 Dec 25 02:08 UTC │
	│ update-context │ functional-545294 update-context --alsologtostderr -v=2                                                 │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:08 UTC │ 09 Dec 25 02:08 UTC │
	│ update-context │ functional-545294 update-context --alsologtostderr -v=2                                                 │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:08 UTC │ 09 Dec 25 02:08 UTC │
	│ update-context │ functional-545294 update-context --alsologtostderr -v=2                                                 │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:08 UTC │ 09 Dec 25 02:08 UTC │
	│ service        │ functional-545294 service list                                                                          │ functional-545294 │ jenkins │ v1.37.0 │ 09 Dec 25 02:16 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 02:07:41
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 02:07:41.399695  265360 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:07:41.400095  265360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:07:41.400115  265360 out.go:374] Setting ErrFile to fd 2...
	I1209 02:07:41.400123  265360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:07:41.400441  265360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
	I1209 02:07:41.400982  265360 out.go:368] Setting JSON to false
	I1209 02:07:41.402038  265360 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":28211,"bootTime":1765217850,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:07:41.402104  265360 start.go:143] virtualization: kvm guest
	I1209 02:07:41.404511  265360 out.go:179] * [functional-545294] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 02:07:41.406041  265360 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:07:41.406047  265360 notify.go:221] Checking for updates...
	I1209 02:07:41.409129  265360 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:07:41.413797  265360 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-254936/kubeconfig
	I1209 02:07:41.415477  265360 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-254936/.minikube
	I1209 02:07:41.416913  265360 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:07:41.418187  265360 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:07:41.419756  265360 config.go:182] Loaded profile config "functional-545294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:07:41.420298  265360 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:07:41.454123  265360 out.go:179] * Using the kvm2 driver based on existing profile
	I1209 02:07:41.455559  265360 start.go:309] selected driver: kvm2
	I1209 02:07:41.455578  265360 start.go:927] validating driver "kvm2" against &{Name:functional-545294 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-545294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:07:41.455788  265360 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:07:41.456977  265360 cni.go:84] Creating CNI manager for ""
	I1209 02:07:41.457046  265360 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 02:07:41.457106  265360 start.go:353] cluster config:
	{Name:functional-545294 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-545294 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:07:41.458780  265360 out.go:179] * dry-run validation complete!
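	The "Last Start" block above is the log of the --dry-run invocation recorded in the Audit table; as a sketch, re-running that same command against the existing profile should reproduce the "dry-run validation complete!" output without modifying the running VM:
	out/minikube-linux-amd64 start -p functional-545294 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio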
	
	
	==> CRI-O <==
	Dec 09 02:16:31 functional-545294 crio[5890]: time="2025-12-09 02:16:31.625480651Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765246591625453432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:240114,},InodesUsed:&UInt64Value{Value:104,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=789bab0d-89f5-4c1d-adc7-b223a5ba0bd9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:16:31 functional-545294 crio[5890]: time="2025-12-09 02:16:31.626693008Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=680596a2-0c96-4a2d-8c36-fbbc09b96f34 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:16:31 functional-545294 crio[5890]: time="2025-12-09 02:16:31.626825694Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=680596a2-0c96-4a2d-8c36-fbbc09b96f34 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:16:31 functional-545294 crio[5890]: time="2025-12-09 02:16:31.627260135Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47025c8e887c9b1c3ff3b4d7ad309846f91891a5cfc411e153c7bea0e23bdd24,PodSandboxId:831e808d811a5b3edbc977765dfe4d922f201a07cb785f406d402f2e2138e496,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765246101229560357,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-nbwpp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0f362234-70c0-47ff-afab-c6cb6c695ef6,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:291f1e39539ff7d651b830e7e042f1b9fcdb535d35c3bb69037513f9a244efe0,PodSandboxId:270dd75437f03d4bdcdca5fab35a5917a40402319586d912142502218051eb5a,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765246064745716091,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c8feca44-44ed-4eb4-8817-2e317d08cf50,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ffd46bc92d655ae18972da483ae0b30262242aca0b5bbbf3d3e198e2b48fbc,PodSandboxId:4229a9648f0388b0c24f25f7aa77aacf8bd14e2cd4c7af11f8e5436d683ca9b2,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765246053617985118,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ebc811a7-8de7-4db7-ab8a-6466ab5be638,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb11585f8efb9685c531517ce6af0bd09674e3b5074d553d77e853bbacd7d794,PodSandboxId:b0d4c688e00ac0c63523f99e02d00e98818433a710144919e971d2dfc83f7472,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765245970662252981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gzjhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ccb903-d2f9-4e8c-b9fe-25384e736e56,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.po
rts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a0eb6016c3c8701b48b1c0daf6bd0ec2d9b246d60f5a2d55046e6c67d5e54cd,PodSandboxId:93dd276d617f88e3fcb4ba99fec9ce51d91246375b2553a08d899efea75fb28f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CO
NTAINER_RUNNING,CreatedAt:1765245970630464144,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7da85099-845e-43c0-abe3-694b2e59c644,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32e467a00eda98540f5fcd1ff5ac24480720d3fe71af69bd08b899810f97631,PodSandboxId:549a6ed604b063b938220bc6c839a85ec7d5c9fa23b1991faf03dd3a416180ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,Creat
edAt:1765245970581962315,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zwr8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17f41900-ed85-412a-b753-83ab0612a0d0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:600d2ecc96b44f7021b08ce2b5646ec800acfbb9b54d330fe059d5883db48b2e,PodSandboxId:06a38f77b61a47f375fe95f6b21b3046fbdf10582e34e2a4260019e558d8e573,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765245965739866434,Labe
ls:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff84ec456446fa53da87afee0f115e3f,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0561cdb1d940f72d9593f84899a0b04ab988d3ad75efe7b4ce43e03d904a29d,PodSandboxId:6e0a2877b725072401fab9146721d08dc56a4c4a81b3fbda297cc79dd8a963f1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8
bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765245965724590043,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfde71ed46fcf3e1413b4854831f8db9,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0150a2026dd7fbd8eef1be4f1149d3e711240afe63836d16882d2d4d2cd9d575,PodSandboxId:24063f0110f95e10240d247cba620e91ca39c134e08071df109b36ed7d5d6662,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581
b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765245965754099763,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5986582f384e8b76964852dba738451,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33019257a9a137de2f80ac7281f6d3349e5a11dccb52dfd133715a38715898e6,PodSandboxId:9bafe39d1fb4e9eca47c6e6b77ed0e9de1020797cc92de67eac433fe3f043fcc,Metadata:&ContainerMetada
ta{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765245965692497170,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d49e5f2eca05156c73661728e6bef94,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0791a3f540c43064eea797dadcd0d0b96a847e6c2b7d19bc79595a654597
4c2c,PodSandboxId:b57d9da4d6ff79239594fec3344c2d956d8f4be7ce4466409183d6f31fe170ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765245925898872103,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gzjhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ccb903-d2f9-4e8c-b9fe-25384e736e56,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness
-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a691542f0f67679391b917d59ffd9d5fe7c07b0b46dab363cd0d823a78d97cc,PodSandboxId:44f15f5505463fe9d6adebc9fdd36d278e59ed90cd5ac660218c41fafdef026d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765245925874479288,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zwr8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17f41900-ed85-412a-b753-83ab0612a0d0,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b94476afa73e7f5b21730d0829202d1d6524a0e811201d33665db75577741924,PodSandboxId:7471f0469bc616aead498ee3784d1390a01a97599c3d5a5318444e7deaa41c20,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765245925885367398,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7da85099-845e-43c0-abe3-694b2e59c644,},Annotations:map[string]string{io.kubernetes.container.hash: 6
c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3bd02c96f2d3339fa9d3bd443fc7063b2baf4b197fa5941beab45b59e9f1e71,PodSandboxId:c48aca2827221a2e8fa145591415bf88bceb762f5b087eed3f0844db1fdbb470,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765245921265200501,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfde71ed46fcf3e1413b4854831f8db9,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5e268ecdaf2fde98e798be33bab31cafba814db74814a43faf6667e7ce4f8c,PodSandboxId:c758dc0b99425c52831bdd825fbfbebf76a7560fb6bd8235f50f606ef4dd19d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765245921241204939,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-545294,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: ff84ec456446fa53da87afee0f115e3f,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04529d5f4a6494a40a53f85e66394b0cccf5a9dfa167ac714917d9c21812746c,PodSandboxId:72521b6a719a676feec078bcc76ce9308b46e162a4af8f254a06efb00c8f8e55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765245921228615658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kub
ernetes.pod.name: etcd-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5986582f384e8b76964852dba738451,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=680596a2-0c96-4a2d-8c36-fbbc09b96f34 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:16:31 functional-545294 crio[5890]: time="2025-12-09 02:16:31.675167046Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad868343-8a6e-42bd-bbfe-1f7236b72f37 name=/runtime.v1.RuntimeService/Version
	Dec 09 02:16:31 functional-545294 crio[5890]: time="2025-12-09 02:16:31.675540042Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad868343-8a6e-42bd-bbfe-1f7236b72f37 name=/runtime.v1.RuntimeService/Version
	Dec 09 02:16:31 functional-545294 crio[5890]: time="2025-12-09 02:16:31.677483476Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e718738b-8a19-4c93-80bc-a04d775d77bf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:16:31 functional-545294 crio[5890]: time="2025-12-09 02:16:31.678542988Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765246591678513714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:240114,},InodesUsed:&UInt64Value{Value:104,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e718738b-8a19-4c93-80bc-a04d775d77bf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:16:31 functional-545294 crio[5890]: time="2025-12-09 02:16:31.679701229Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac4679b3-a335-4b1b-ab50-fdfcbd9e6df5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:16:31 functional-545294 crio[5890]: time="2025-12-09 02:16:31.679900844Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac4679b3-a335-4b1b-ab50-fdfcbd9e6df5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:16:31 functional-545294 crio[5890]: time="2025-12-09 02:16:31.680357041Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47025c8e887c9b1c3ff3b4d7ad309846f91891a5cfc411e153c7bea0e23bdd24,PodSandboxId:831e808d811a5b3edbc977765dfe4d922f201a07cb785f406d402f2e2138e496,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765246101229560357,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-nbwpp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0f362234-70c0-47ff-afab-c6cb6c695ef6,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:291f1e39539ff7d651b830e7e042f1b9fcdb535d35c3bb69037513f9a244efe0,PodSandboxId:270dd75437f03d4bdcdca5fab35a5917a40402319586d912142502218051eb5a,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765246064745716091,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c8feca44-44ed-4eb4-8817-2e317d08cf50,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ffd46bc92d655ae18972da483ae0b30262242aca0b5bbbf3d3e198e2b48fbc,PodSandboxId:4229a9648f0388b0c24f25f7aa77aacf8bd14e2cd4c7af11f8e5436d683ca9b2,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765246053617985118,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ebc811a7-8de7-4db7-ab8a-6466ab5be638,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb11585f8efb9685c531517ce6af0bd09674e3b5074d553d77e853bbacd7d794,PodSandboxId:b0d4c688e00ac0c63523f99e02d00e98818433a710144919e971d2dfc83f7472,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765245970662252981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gzjhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ccb903-d2f9-4e8c-b9fe-25384e736e56,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.po
rts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a0eb6016c3c8701b48b1c0daf6bd0ec2d9b246d60f5a2d55046e6c67d5e54cd,PodSandboxId:93dd276d617f88e3fcb4ba99fec9ce51d91246375b2553a08d899efea75fb28f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CO
NTAINER_RUNNING,CreatedAt:1765245970630464144,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7da85099-845e-43c0-abe3-694b2e59c644,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32e467a00eda98540f5fcd1ff5ac24480720d3fe71af69bd08b899810f97631,PodSandboxId:549a6ed604b063b938220bc6c839a85ec7d5c9fa23b1991faf03dd3a416180ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,Creat
edAt:1765245970581962315,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zwr8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17f41900-ed85-412a-b753-83ab0612a0d0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:600d2ecc96b44f7021b08ce2b5646ec800acfbb9b54d330fe059d5883db48b2e,PodSandboxId:06a38f77b61a47f375fe95f6b21b3046fbdf10582e34e2a4260019e558d8e573,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765245965739866434,Labe
ls:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff84ec456446fa53da87afee0f115e3f,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0561cdb1d940f72d9593f84899a0b04ab988d3ad75efe7b4ce43e03d904a29d,PodSandboxId:6e0a2877b725072401fab9146721d08dc56a4c4a81b3fbda297cc79dd8a963f1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8
bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765245965724590043,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfde71ed46fcf3e1413b4854831f8db9,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0150a2026dd7fbd8eef1be4f1149d3e711240afe63836d16882d2d4d2cd9d575,PodSandboxId:24063f0110f95e10240d247cba620e91ca39c134e08071df109b36ed7d5d6662,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581
b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765245965754099763,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5986582f384e8b76964852dba738451,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33019257a9a137de2f80ac7281f6d3349e5a11dccb52dfd133715a38715898e6,PodSandboxId:9bafe39d1fb4e9eca47c6e6b77ed0e9de1020797cc92de67eac433fe3f043fcc,Metadata:&ContainerMetada
ta{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765245965692497170,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d49e5f2eca05156c73661728e6bef94,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0791a3f540c43064eea797dadcd0d0b96a847e6c2b7d19bc79595a654597
4c2c,PodSandboxId:b57d9da4d6ff79239594fec3344c2d956d8f4be7ce4466409183d6f31fe170ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765245925898872103,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gzjhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ccb903-d2f9-4e8c-b9fe-25384e736e56,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness
-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a691542f0f67679391b917d59ffd9d5fe7c07b0b46dab363cd0d823a78d97cc,PodSandboxId:44f15f5505463fe9d6adebc9fdd36d278e59ed90cd5ac660218c41fafdef026d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765245925874479288,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zwr8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17f41900-ed85-412a-b753-83ab0612a0d0,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b94476afa73e7f5b21730d0829202d1d6524a0e811201d33665db75577741924,PodSandboxId:7471f0469bc616aead498ee3784d1390a01a97599c3d5a5318444e7deaa41c20,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765245925885367398,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7da85099-845e-43c0-abe3-694b2e59c644,},Annotations:map[string]string{io.kubernetes.container.hash: 6
c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3bd02c96f2d3339fa9d3bd443fc7063b2baf4b197fa5941beab45b59e9f1e71,PodSandboxId:c48aca2827221a2e8fa145591415bf88bceb762f5b087eed3f0844db1fdbb470,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765245921265200501,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfde71ed46fcf3e1413b4854831f8db9,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5e268ecdaf2fde98e798be33bab31cafba814db74814a43faf6667e7ce4f8c,PodSandboxId:c758dc0b99425c52831bdd825fbfbebf76a7560fb6bd8235f50f606ef4dd19d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765245921241204939,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-545294,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: ff84ec456446fa53da87afee0f115e3f,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04529d5f4a6494a40a53f85e66394b0cccf5a9dfa167ac714917d9c21812746c,PodSandboxId:72521b6a719a676feec078bcc76ce9308b46e162a4af8f254a06efb00c8f8e55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765245921228615658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kub
ernetes.pod.name: etcd-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5986582f384e8b76964852dba738451,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ac4679b3-a335-4b1b-ab50-fdfcbd9e6df5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:16:31 functional-545294 crio[5890]: time="2025-12-09 02:16:31.714665743Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a323f3e7-5461-49fc-906f-a2037cdeb04a name=/runtime.v1.RuntimeService/Version
	Dec 09 02:16:31 functional-545294 crio[5890]: time="2025-12-09 02:16:31.714747618Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a323f3e7-5461-49fc-906f-a2037cdeb04a name=/runtime.v1.RuntimeService/Version
	Dec 09 02:16:31 functional-545294 crio[5890]: time="2025-12-09 02:16:31.716622500Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=248fc65c-6116-4c0a-a8bd-62657b5154b1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:16:31 functional-545294 crio[5890]: time="2025-12-09 02:16:31.718418140Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765246591718327546,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:240114,},InodesUsed:&UInt64Value{Value:104,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=248fc65c-6116-4c0a-a8bd-62657b5154b1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:16:31 functional-545294 crio[5890]: time="2025-12-09 02:16:31.720321689Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=89be04b4-3fac-42dc-a8dc-9f138aa172a2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:16:31 functional-545294 crio[5890]: time="2025-12-09 02:16:31.720728524Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=89be04b4-3fac-42dc-a8dc-9f138aa172a2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:16:31 functional-545294 crio[5890]: time="2025-12-09 02:16:31.721657205Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47025c8e887c9b1c3ff3b4d7ad309846f91891a5cfc411e153c7bea0e23bdd24,PodSandboxId:831e808d811a5b3edbc977765dfe4d922f201a07cb785f406d402f2e2138e496,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765246101229560357,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-nbwpp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0f362234-70c0-47ff-afab-c6cb6c695ef6,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:291f1e39539ff7d651b830e7e042f1b9fcdb535d35c3bb69037513f9a244efe0,PodSandboxId:270dd75437f03d4bdcdca5fab35a5917a40402319586d912142502218051eb5a,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765246064745716091,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c8feca44-44ed-4eb4-8817-2e317d08cf50,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ffd46bc92d655ae18972da483ae0b30262242aca0b5bbbf3d3e198e2b48fbc,PodSandboxId:4229a9648f0388b0c24f25f7aa77aacf8bd14e2cd4c7af11f8e5436d683ca9b2,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765246053617985118,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ebc811a7-8de7-4db7-ab8a-6466ab5be638,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb11585f8efb9685c531517ce6af0bd09674e3b5074d553d77e853bbacd7d794,PodSandboxId:b0d4c688e00ac0c63523f99e02d00e98818433a710144919e971d2dfc83f7472,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765245970662252981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gzjhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ccb903-d2f9-4e8c-b9fe-25384e736e56,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.po
rts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a0eb6016c3c8701b48b1c0daf6bd0ec2d9b246d60f5a2d55046e6c67d5e54cd,PodSandboxId:93dd276d617f88e3fcb4ba99fec9ce51d91246375b2553a08d899efea75fb28f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CO
NTAINER_RUNNING,CreatedAt:1765245970630464144,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7da85099-845e-43c0-abe3-694b2e59c644,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32e467a00eda98540f5fcd1ff5ac24480720d3fe71af69bd08b899810f97631,PodSandboxId:549a6ed604b063b938220bc6c839a85ec7d5c9fa23b1991faf03dd3a416180ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,Creat
edAt:1765245970581962315,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zwr8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17f41900-ed85-412a-b753-83ab0612a0d0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:600d2ecc96b44f7021b08ce2b5646ec800acfbb9b54d330fe059d5883db48b2e,PodSandboxId:06a38f77b61a47f375fe95f6b21b3046fbdf10582e34e2a4260019e558d8e573,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765245965739866434,Labe
ls:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff84ec456446fa53da87afee0f115e3f,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0561cdb1d940f72d9593f84899a0b04ab988d3ad75efe7b4ce43e03d904a29d,PodSandboxId:6e0a2877b725072401fab9146721d08dc56a4c4a81b3fbda297cc79dd8a963f1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8
bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765245965724590043,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfde71ed46fcf3e1413b4854831f8db9,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0150a2026dd7fbd8eef1be4f1149d3e711240afe63836d16882d2d4d2cd9d575,PodSandboxId:24063f0110f95e10240d247cba620e91ca39c134e08071df109b36ed7d5d6662,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581
b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765245965754099763,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5986582f384e8b76964852dba738451,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33019257a9a137de2f80ac7281f6d3349e5a11dccb52dfd133715a38715898e6,PodSandboxId:9bafe39d1fb4e9eca47c6e6b77ed0e9de1020797cc92de67eac433fe3f043fcc,Metadata:&ContainerMetada
ta{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765245965692497170,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d49e5f2eca05156c73661728e6bef94,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0791a3f540c43064eea797dadcd0d0b96a847e6c2b7d19bc79595a654597
4c2c,PodSandboxId:b57d9da4d6ff79239594fec3344c2d956d8f4be7ce4466409183d6f31fe170ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765245925898872103,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gzjhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ccb903-d2f9-4e8c-b9fe-25384e736e56,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness
-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a691542f0f67679391b917d59ffd9d5fe7c07b0b46dab363cd0d823a78d97cc,PodSandboxId:44f15f5505463fe9d6adebc9fdd36d278e59ed90cd5ac660218c41fafdef026d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765245925874479288,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zwr8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17f41900-ed85-412a-b753-83ab0612a0d0,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b94476afa73e7f5b21730d0829202d1d6524a0e811201d33665db75577741924,PodSandboxId:7471f0469bc616aead498ee3784d1390a01a97599c3d5a5318444e7deaa41c20,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765245925885367398,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7da85099-845e-43c0-abe3-694b2e59c644,},Annotations:map[string]string{io.kubernetes.container.hash: 6
c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3bd02c96f2d3339fa9d3bd443fc7063b2baf4b197fa5941beab45b59e9f1e71,PodSandboxId:c48aca2827221a2e8fa145591415bf88bceb762f5b087eed3f0844db1fdbb470,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765245921265200501,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfde71ed46fcf3e1413b4854831f8db9,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5e268ecdaf2fde98e798be33bab31cafba814db74814a43faf6667e7ce4f8c,PodSandboxId:c758dc0b99425c52831bdd825fbfbebf76a7560fb6bd8235f50f606ef4dd19d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765245921241204939,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-545294,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: ff84ec456446fa53da87afee0f115e3f,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04529d5f4a6494a40a53f85e66394b0cccf5a9dfa167ac714917d9c21812746c,PodSandboxId:72521b6a719a676feec078bcc76ce9308b46e162a4af8f254a06efb00c8f8e55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765245921228615658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kub
ernetes.pod.name: etcd-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5986582f384e8b76964852dba738451,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=89be04b4-3fac-42dc-a8dc-9f138aa172a2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:16:31 functional-545294 crio[5890]: time="2025-12-09 02:16:31.756932310Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=27e39f5f-3cdb-48e0-b212-cc9746894314 name=/runtime.v1.RuntimeService/Version
	Dec 09 02:16:31 functional-545294 crio[5890]: time="2025-12-09 02:16:31.757271597Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=27e39f5f-3cdb-48e0-b212-cc9746894314 name=/runtime.v1.RuntimeService/Version
	Dec 09 02:16:31 functional-545294 crio[5890]: time="2025-12-09 02:16:31.758906839Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7b26acdb-3ac5-4d08-974e-f9dba92aa01b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:16:31 functional-545294 crio[5890]: time="2025-12-09 02:16:31.760251236Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765246591760224903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:240114,},InodesUsed:&UInt64Value{Value:104,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b26acdb-3ac5-4d08-974e-f9dba92aa01b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:16:31 functional-545294 crio[5890]: time="2025-12-09 02:16:31.761451943Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eabd24bc-cb99-4e99-9622-ecdc61924e59 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:16:31 functional-545294 crio[5890]: time="2025-12-09 02:16:31.761506503Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eabd24bc-cb99-4e99-9622-ecdc61924e59 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:16:31 functional-545294 crio[5890]: time="2025-12-09 02:16:31.761800240Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47025c8e887c9b1c3ff3b4d7ad309846f91891a5cfc411e153c7bea0e23bdd24,PodSandboxId:831e808d811a5b3edbc977765dfe4d922f201a07cb785f406d402f2e2138e496,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765246101229560357,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-nbwpp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0f362234-70c0-47ff-afab-c6cb6c695ef6,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:291f1e39539ff7d651b830e7e042f1b9fcdb535d35c3bb69037513f9a244efe0,PodSandboxId:270dd75437f03d4bdcdca5fab35a5917a40402319586d912142502218051eb5a,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765246064745716091,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c8feca44-44ed-4eb4-8817-2e317d08cf50,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ffd46bc92d655ae18972da483ae0b30262242aca0b5bbbf3d3e198e2b48fbc,PodSandboxId:4229a9648f0388b0c24f25f7aa77aacf8bd14e2cd4c7af11f8e5436d683ca9b2,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765246053617985118,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ebc811a7-8de7-4db7-ab8a-6466ab5be638,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb11585f8efb9685c531517ce6af0bd09674e3b5074d553d77e853bbacd7d794,PodSandboxId:b0d4c688e00ac0c63523f99e02d00e98818433a710144919e971d2dfc83f7472,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765245970662252981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gzjhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ccb903-d2f9-4e8c-b9fe-25384e736e56,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.po
rts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a0eb6016c3c8701b48b1c0daf6bd0ec2d9b246d60f5a2d55046e6c67d5e54cd,PodSandboxId:93dd276d617f88e3fcb4ba99fec9ce51d91246375b2553a08d899efea75fb28f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CO
NTAINER_RUNNING,CreatedAt:1765245970630464144,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7da85099-845e-43c0-abe3-694b2e59c644,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32e467a00eda98540f5fcd1ff5ac24480720d3fe71af69bd08b899810f97631,PodSandboxId:549a6ed604b063b938220bc6c839a85ec7d5c9fa23b1991faf03dd3a416180ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,Creat
edAt:1765245970581962315,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zwr8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17f41900-ed85-412a-b753-83ab0612a0d0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:600d2ecc96b44f7021b08ce2b5646ec800acfbb9b54d330fe059d5883db48b2e,PodSandboxId:06a38f77b61a47f375fe95f6b21b3046fbdf10582e34e2a4260019e558d8e573,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765245965739866434,Labe
ls:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff84ec456446fa53da87afee0f115e3f,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0561cdb1d940f72d9593f84899a0b04ab988d3ad75efe7b4ce43e03d904a29d,PodSandboxId:6e0a2877b725072401fab9146721d08dc56a4c4a81b3fbda297cc79dd8a963f1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8
bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765245965724590043,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfde71ed46fcf3e1413b4854831f8db9,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0150a2026dd7fbd8eef1be4f1149d3e711240afe63836d16882d2d4d2cd9d575,PodSandboxId:24063f0110f95e10240d247cba620e91ca39c134e08071df109b36ed7d5d6662,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581
b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765245965754099763,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5986582f384e8b76964852dba738451,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33019257a9a137de2f80ac7281f6d3349e5a11dccb52dfd133715a38715898e6,PodSandboxId:9bafe39d1fb4e9eca47c6e6b77ed0e9de1020797cc92de67eac433fe3f043fcc,Metadata:&ContainerMetada
ta{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765245965692497170,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d49e5f2eca05156c73661728e6bef94,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0791a3f540c43064eea797dadcd0d0b96a847e6c2b7d19bc79595a654597
4c2c,PodSandboxId:b57d9da4d6ff79239594fec3344c2d956d8f4be7ce4466409183d6f31fe170ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765245925898872103,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gzjhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ccb903-d2f9-4e8c-b9fe-25384e736e56,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness
-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a691542f0f67679391b917d59ffd9d5fe7c07b0b46dab363cd0d823a78d97cc,PodSandboxId:44f15f5505463fe9d6adebc9fdd36d278e59ed90cd5ac660218c41fafdef026d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765245925874479288,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zwr8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17f41900-ed85-412a-b753-83ab0612a0d0,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b94476afa73e7f5b21730d0829202d1d6524a0e811201d33665db75577741924,PodSandboxId:7471f0469bc616aead498ee3784d1390a01a97599c3d5a5318444e7deaa41c20,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765245925885367398,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7da85099-845e-43c0-abe3-694b2e59c644,},Annotations:map[string]string{io.kubernetes.container.hash: 6
c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3bd02c96f2d3339fa9d3bd443fc7063b2baf4b197fa5941beab45b59e9f1e71,PodSandboxId:c48aca2827221a2e8fa145591415bf88bceb762f5b087eed3f0844db1fdbb470,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765245921265200501,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfde71ed46fcf3e1413b4854831f8db9,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5e268ecdaf2fde98e798be33bab31cafba814db74814a43faf6667e7ce4f8c,PodSandboxId:c758dc0b99425c52831bdd825fbfbebf76a7560fb6bd8235f50f606ef4dd19d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765245921241204939,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-545294,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: ff84ec456446fa53da87afee0f115e3f,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04529d5f4a6494a40a53f85e66394b0cccf5a9dfa167ac714917d9c21812746c,PodSandboxId:72521b6a719a676feec078bcc76ce9308b46e162a4af8f254a06efb00c8f8e55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765245921228615658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kub
ernetes.pod.name: etcd-functional-545294,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5986582f384e8b76964852dba738451,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eabd24bc-cb99-4e99-9622-ecdc61924e59 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                         CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	47025c8e887c9       public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036   8 minutes ago       Running             mysql                     0                   831e808d811a5       mysql-6bcdcbc558-nbwpp                      default
	291f1e39539ff       d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9                                              8 minutes ago       Running             myfrontend                0                   270dd75437f03       sp-pod                                      default
	19ffd46bc92d6       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e           8 minutes ago       Exited              mount-munger              0                   4229a9648f038       busybox-mount                               default
	bb11585f8efb9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                              10 minutes ago      Running             coredns                   3                   b0d4c688e00ac       coredns-66bc5c9577-gzjhc                    kube-system
	9a0eb6016c3c8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              10 minutes ago      Running             storage-provisioner       4                   93dd276d617f8       storage-provisioner                         kube-system
	e32e467a00eda       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                              10 minutes ago      Running             kube-proxy                3                   549a6ed604b06       kube-proxy-zwr8l                            kube-system
	0150a2026dd7f       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                              10 minutes ago      Running             etcd                      3                   24063f0110f95       etcd-functional-545294                      kube-system
	600d2ecc96b44       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                              10 minutes ago      Running             kube-scheduler            3                   06a38f77b61a4       kube-scheduler-functional-545294            kube-system
	d0561cdb1d940       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                              10 minutes ago      Running             kube-controller-manager   3                   6e0a2877b7250       kube-controller-manager-functional-545294   kube-system
	33019257a9a13       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                              10 minutes ago      Running             kube-apiserver            0                   9bafe39d1fb4e       kube-apiserver-functional-545294            kube-system
	0791a3f540c43       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                              11 minutes ago      Exited              coredns                   2                   b57d9da4d6ff7       coredns-66bc5c9577-gzjhc                    kube-system
	b94476afa73e7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              11 minutes ago      Exited              storage-provisioner       3                   7471f0469bc61       storage-provisioner                         kube-system
	1a691542f0f67       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                              11 minutes ago      Exited              kube-proxy                2                   44f15f5505463       kube-proxy-zwr8l                            kube-system
	c3bd02c96f2d3       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                              11 minutes ago      Exited              kube-controller-manager   2                   c48aca2827221       kube-controller-manager-functional-545294   kube-system
	dd5e268ecdaf2       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                              11 minutes ago      Exited              kube-scheduler            2                   c758dc0b99425       kube-scheduler-functional-545294            kube-system
	04529d5f4a649       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                              11 minutes ago      Exited              etcd                      2                   72521b6a719a6       etcd-functional-545294                      kube-system
	
	
	==> coredns [0791a3f540c43064eea797dadcd0d0b96a847e6c2b7d19bc79595a6545974c2c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54594 - 51569 "HINFO IN 8372181516051953211.77839456938040332. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.046581635s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bb11585f8efb9685c531517ce6af0bd09674e3b5074d553d77e853bbacd7d794] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35309 - 28422 "HINFO IN 307392218677794923.4186193438894601201. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.078841625s
	
	
	==> describe nodes <==
	Name:               functional-545294
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-545294
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=functional-545294
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T02_04_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 02:04:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-545294
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 02:16:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 02:15:11 +0000   Tue, 09 Dec 2025 02:04:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 02:15:11 +0000   Tue, 09 Dec 2025 02:04:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 02:15:11 +0000   Tue, 09 Dec 2025 02:04:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 02:15:11 +0000   Tue, 09 Dec 2025 02:04:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.184
	  Hostname:    functional-545294
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 647139dfa1964d2db5480bfef1b99acc
	  System UUID:                647139df-a196-4d2d-b548-0bfef1b99acc
	  Boot ID:                    37f77225-d1dd-43f8-856c-62cf02b08d24
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-bjmjc                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-ztccb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-6bcdcbc558-nbwpp                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    8m51s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m48s
	  kube-system                 coredns-66bc5c9577-gzjhc                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-545294                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-545294              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-545294     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-zwr8l                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-545294              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-rmsbb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m41s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8dzft         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-545294 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-545294 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-545294 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeReady                12m                kubelet          Node functional-545294 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node functional-545294 event: Registered Node functional-545294 in Controller
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-545294 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-545294 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-545294 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-545294 event: Registered Node functional-545294 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-545294 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-545294 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-545294 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-545294 event: Registered Node functional-545294 in Controller
	
	
	==> dmesg <==
	[  +1.190372] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000020] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.083740] kauditd_printk_skb: 1 callbacks suppressed
	[Dec 9 02:04] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.139051] kauditd_printk_skb: 171 callbacks suppressed
	[  +1.032005] kauditd_printk_skb: 18 callbacks suppressed
	[ +29.006070] kauditd_printk_skb: 220 callbacks suppressed
	[Dec 9 02:05] kauditd_printk_skb: 11 callbacks suppressed
	[  +9.640636] kauditd_printk_skb: 291 callbacks suppressed
	[  +0.430584] kauditd_printk_skb: 222 callbacks suppressed
	[  +4.680345] kauditd_printk_skb: 58 callbacks suppressed
	[  +4.780090] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.109932] kauditd_printk_skb: 12 callbacks suppressed
	[Dec 9 02:06] kauditd_printk_skb: 78 callbacks suppressed
	[  +5.609542] kauditd_printk_skb: 167 callbacks suppressed
	[  +5.081016] kauditd_printk_skb: 133 callbacks suppressed
	[  +2.048397] kauditd_printk_skb: 91 callbacks suppressed
	[  +0.000071] kauditd_printk_skb: 74 callbacks suppressed
	[Dec 9 02:07] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.774352] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.241749] kauditd_printk_skb: 109 callbacks suppressed
	[Dec 9 02:08] kauditd_printk_skb: 74 callbacks suppressed
	[ +14.356751] crun[10477]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +2.479232] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [0150a2026dd7fbd8eef1be4f1149d3e711240afe63836d16882d2d4d2cd9d575] <==
	{"level":"warn","ts":"2025-12-09T02:08:20.003095Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"177.631085ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T02:08:20.003168Z","caller":"traceutil/trace.go:172","msg":"trace[1020812714] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:912; }","duration":"177.66913ms","start":"2025-12-09T02:08:19.825450Z","end":"2025-12-09T02:08:20.003119Z","steps":["trace[1020812714] 'agreement among raft nodes before linearized reading'  (duration: 177.327113ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T02:08:20.002889Z","caller":"traceutil/trace.go:172","msg":"trace[1367457961] transaction","detail":"{read_only:false; response_revision:912; number_of_response:1; }","duration":"391.382608ms","start":"2025-12-09T02:08:19.611496Z","end":"2025-12-09T02:08:20.002878Z","steps":["trace[1367457961] 'process raft request'  (duration: 391.019222ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T02:08:20.005980Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-09T02:08:19.611479Z","time spent":"394.340715ms","remote":"127.0.0.1:54568","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:911 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-12-09T02:08:20.336700Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.249146ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T02:08:20.336765Z","caller":"traceutil/trace.go:172","msg":"trace[144813138] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:912; }","duration":"109.377971ms","start":"2025-12-09T02:08:20.227376Z","end":"2025-12-09T02:08:20.336754Z","steps":["trace[144813138] 'range keys from in-memory index tree'  (duration: 109.196157ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T02:08:20.810490Z","caller":"traceutil/trace.go:172","msg":"trace[1619745664] linearizableReadLoop","detail":"{readStateIndex:1016; appliedIndex:1016; }","duration":"192.187362ms","start":"2025-12-09T02:08:20.618287Z","end":"2025-12-09T02:08:20.810474Z","steps":["trace[1619745664] 'read index received'  (duration: 192.182489ms)","trace[1619745664] 'applied index is now lower than readState.Index'  (duration: 4.268µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-09T02:08:20.810573Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"192.2733ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T02:08:20.810588Z","caller":"traceutil/trace.go:172","msg":"trace[1911247656] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:912; }","duration":"192.300929ms","start":"2025-12-09T02:08:20.618283Z","end":"2025-12-09T02:08:20.810584Z","steps":["trace[1911247656] 'agreement among raft nodes before linearized reading'  (duration: 192.245332ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T02:08:22.742838Z","caller":"traceutil/trace.go:172","msg":"trace[837358911] transaction","detail":"{read_only:false; response_revision:927; number_of_response:1; }","duration":"142.237669ms","start":"2025-12-09T02:08:22.600586Z","end":"2025-12-09T02:08:22.742824Z","steps":["trace[837358911] 'process raft request'  (duration: 142.154199ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T02:08:22.743170Z","caller":"traceutil/trace.go:172","msg":"trace[756897999] linearizableReadLoop","detail":"{readStateIndex:1032; appliedIndex:1033; }","duration":"123.003501ms","start":"2025-12-09T02:08:22.620157Z","end":"2025-12-09T02:08:22.743160Z","steps":["trace[756897999] 'read index received'  (duration: 123.000464ms)","trace[756897999] 'applied index is now lower than readState.Index'  (duration: 2.391µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-09T02:08:22.743236Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.067875ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T02:08:22.743251Z","caller":"traceutil/trace.go:172","msg":"trace[1578570093] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:927; }","duration":"123.093605ms","start":"2025-12-09T02:08:22.620153Z","end":"2025-12-09T02:08:22.743247Z","steps":["trace[1578570093] 'agreement among raft nodes before linearized reading'  (duration: 123.040471ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T02:08:26.193519Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"444.5099ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9433522797471565165 > lease_revoke:<id:02ea9b00dbd784e4>","response":"size:29"}
	{"level":"info","ts":"2025-12-09T02:08:26.193632Z","caller":"traceutil/trace.go:172","msg":"trace[1284558279] linearizableReadLoop","detail":"{readStateIndex:1035; appliedIndex:1034; }","duration":"370.030474ms","start":"2025-12-09T02:08:25.823592Z","end":"2025-12-09T02:08:26.193622Z","steps":["trace[1284558279] 'read index received'  (duration: 25.536µs)","trace[1284558279] 'applied index is now lower than readState.Index'  (duration: 370.004319ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-09T02:08:26.193849Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"370.250113ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T02:08:26.193891Z","caller":"traceutil/trace.go:172","msg":"trace[1953016275] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:928; }","duration":"370.296736ms","start":"2025-12-09T02:08:25.823588Z","end":"2025-12-09T02:08:26.193885Z","steps":["trace[1953016275] 'agreement among raft nodes before linearized reading'  (duration: 370.22523ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T02:08:26.193938Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-09T02:08:25.823573Z","time spent":"370.333531ms","remote":"127.0.0.1:54598","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-12-09T02:08:26.194174Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"247.91798ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T02:08:26.194212Z","caller":"traceutil/trace.go:172","msg":"trace[1807749834] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:928; }","duration":"247.958108ms","start":"2025-12-09T02:08:25.946248Z","end":"2025-12-09T02:08:26.194207Z","steps":["trace[1807749834] 'agreement among raft nodes before linearized reading'  (duration: 247.903203ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T02:08:26.194400Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"140.073937ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2025-12-09T02:08:26.194439Z","caller":"traceutil/trace.go:172","msg":"trace[660959889] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:928; }","duration":"140.114024ms","start":"2025-12-09T02:08:26.054319Z","end":"2025-12-09T02:08:26.194433Z","steps":["trace[660959889] 'agreement among raft nodes before linearized reading'  (duration: 140.020194ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T02:16:06.925846Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1096}
	{"level":"info","ts":"2025-12-09T02:16:06.952777Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1096,"took":"26.232203ms","hash":2480134437,"current-db-size-bytes":3534848,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1609728,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-12-09T02:16:06.952842Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2480134437,"revision":1096,"compact-revision":-1}
	
	
	==> etcd [04529d5f4a6494a40a53f85e66394b0cccf5a9dfa167ac714917d9c21812746c] <==
	{"level":"warn","ts":"2025-12-09T02:05:23.338509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:05:23.353218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:05:23.371748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:05:23.381964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:05:23.399347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:05:23.412155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:05:23.509067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40368","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-09T02:05:48.441858Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-09T02:05:48.443863Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-545294","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.184:2380"],"advertise-client-urls":["https://192.168.39.184:2379"]}
	{"level":"error","ts":"2025-12-09T02:05:48.444832Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-09T02:05:48.538599Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-09T02:05:48.538657Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-09T02:05:48.538675Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"989272a6374482ea","current-leader-member-id":"989272a6374482ea"}
	{"level":"info","ts":"2025-12-09T02:05:48.538752Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-09T02:05:48.538762Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-09T02:05:48.539259Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.184:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-09T02:05:48.539452Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.184:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-09T02:05:48.539479Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.184:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-09T02:05:48.539583Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-09T02:05:48.539681Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-09T02:05:48.539707Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-09T02:05:48.542133Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.184:2380"}
	{"level":"error","ts":"2025-12-09T02:05:48.542189Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.184:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-09T02:05:48.542210Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.184:2380"}
	{"level":"info","ts":"2025-12-09T02:05:48.542215Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-545294","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.184:2380"],"advertise-client-urls":["https://192.168.39.184:2379"]}
	
	
	==> kernel <==
	 02:16:32 up 12 min,  0 users,  load average: 0.11, 0.37, 0.30
	Linux functional-545294 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [33019257a9a137de2f80ac7281f6d3349e5a11dccb52dfd133715a38715898e6] <==
	I1209 02:06:09.205599       1 cache.go:39] Caches are synced for autoregister controller
	I1209 02:06:09.797123       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1209 02:06:09.986149       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1209 02:06:11.123786       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1209 02:06:11.196376       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1209 02:06:11.249317       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 02:06:11.266771       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 02:06:12.522472       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 02:06:12.772908       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1209 02:06:12.872500       1 controller.go:667] quota admission added evaluator for: endpoints
	I1209 02:06:24.900942       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.135.133"}
	I1209 02:06:30.217791       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.109.42.231"}
	I1209 02:06:30.606675       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.198.212"}
	I1209 02:07:41.772495       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.110.148.249"}
	E1209 02:07:42.840402       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8441->192.168.39.1:42032: use of closed network connection
	E1209 02:07:50.328340       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8441->192.168.39.1:45564: use of closed network connection
	I1209 02:07:51.350074       1 controller.go:667] quota admission added evaluator for: namespaces
	I1209 02:07:51.768263       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.101.45"}
	I1209 02:07:51.802791       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.67.131"}
	E1209 02:08:26.987317       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8441->192.168.39.1:51836: use of closed network connection
	E1209 02:08:28.054828       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8441->192.168.39.1:51860: use of closed network connection
	E1209 02:08:29.468371       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8441->192.168.39.1:51870: use of closed network connection
	E1209 02:08:31.316636       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8441->192.168.39.1:51892: use of closed network connection
	E1209 02:08:35.158814       1 conn.go:339] Error on socket receive: read tcp 192.168.39.184:8441->192.168.39.1:54452: use of closed network connection
	I1209 02:16:09.104386       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [c3bd02c96f2d3339fa9d3bd443fc7063b2baf4b197fa5941beab45b59e9f1e71] <==
	I1209 02:05:27.470621       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1209 02:05:27.471242       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-545294"
	I1209 02:05:27.471430       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1209 02:05:27.474344       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1209 02:05:27.477438       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1209 02:05:27.477546       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1209 02:05:27.481982       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1209 02:05:27.485874       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1209 02:05:27.493943       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1209 02:05:27.496206       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1209 02:05:27.501290       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1209 02:05:27.502862       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1209 02:05:27.503047       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1209 02:05:27.503125       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1209 02:05:27.504186       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1209 02:05:27.509366       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1209 02:05:27.514400       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1209 02:05:27.516437       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 02:05:27.518312       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1209 02:05:27.522237       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1209 02:05:27.568197       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1209 02:05:27.694771       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1209 02:05:27.702894       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1209 02:05:27.703086       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1209 02:05:27.703118       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [d0561cdb1d940f72d9593f84899a0b04ab988d3ad75efe7b4ce43e03d904a29d] <==
	I1209 02:06:12.519650       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1209 02:06:12.519669       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1209 02:06:12.519945       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1209 02:06:12.522083       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1209 02:06:12.525421       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1209 02:06:12.526643       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1209 02:06:12.533477       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1209 02:06:12.533442       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1209 02:06:12.536732       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 02:06:12.539185       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1209 02:06:12.539288       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1209 02:06:12.539422       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1209 02:06:12.539516       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-545294"
	I1209 02:06:12.539550       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1209 02:06:12.542145       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1209 02:06:12.545166       1 shared_informer.go:356] "Caches are synced" controller="job"
	E1209 02:07:51.480180       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:07:51.487321       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:07:51.509273       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:07:51.515375       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:07:51.542183       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:07:51.545675       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:07:51.572173       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:07:51.572206       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:07:51.584118       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [1a691542f0f67679391b917d59ffd9d5fe7c07b0b46dab363cd0d823a78d97cc] <==
	I1209 02:05:26.320349       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1209 02:05:26.421246       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1209 02:05:26.421613       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.184"]
	E1209 02:05:26.422529       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 02:05:26.525897       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1209 02:05:26.525977       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 02:05:26.526602       1 server_linux.go:132] "Using iptables Proxier"
	I1209 02:05:26.562347       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 02:05:26.563653       1 server.go:527] "Version info" version="v1.34.2"
	I1209 02:05:26.563962       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:05:26.579817       1 config.go:200] "Starting service config controller"
	I1209 02:05:26.580206       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 02:05:26.580444       1 config.go:106] "Starting endpoint slice config controller"
	I1209 02:05:26.580517       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 02:05:26.583355       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 02:05:26.583455       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 02:05:26.590933       1 config.go:309] "Starting node config controller"
	I1209 02:05:26.591060       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 02:05:26.591069       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 02:05:26.680620       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 02:05:26.680658       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1209 02:05:26.683597       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [e32e467a00eda98540f5fcd1ff5ac24480720d3fe71af69bd08b899810f97631] <==
	I1209 02:06:11.104122       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1209 02:06:11.205083       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1209 02:06:11.205110       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.184"]
	E1209 02:06:11.205200       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 02:06:11.337784       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1209 02:06:11.337915       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 02:06:11.337954       1 server_linux.go:132] "Using iptables Proxier"
	I1209 02:06:11.361533       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 02:06:11.361939       1 server.go:527] "Version info" version="v1.34.2"
	I1209 02:06:11.363211       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:06:11.379755       1 config.go:200] "Starting service config controller"
	I1209 02:06:11.379769       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 02:06:11.379788       1 config.go:106] "Starting endpoint slice config controller"
	I1209 02:06:11.379797       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 02:06:11.379838       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 02:06:11.379843       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 02:06:11.380923       1 config.go:309] "Starting node config controller"
	I1209 02:06:11.385662       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 02:06:11.385780       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 02:06:11.481250       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 02:06:11.481382       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 02:06:11.484130       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [600d2ecc96b44f7021b08ce2b5646ec800acfbb9b54d330fe059d5883db48b2e] <==
	I1209 02:06:06.775309       1 serving.go:386] Generated self-signed cert in-memory
	W1209 02:06:09.046112       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 02:06:09.046155       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 02:06:09.046165       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 02:06:09.046171       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 02:06:09.110256       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1209 02:06:09.110354       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:06:09.116483       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:06:09.116532       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:06:09.117060       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1209 02:06:09.117800       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 02:06:09.217073       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [dd5e268ecdaf2fde98e798be33bab31cafba814db74814a43faf6667e7ce4f8c] <==
	I1209 02:05:23.247182       1 serving.go:386] Generated self-signed cert in-memory
	W1209 02:05:24.076293       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 02:05:24.076332       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 02:05:24.076346       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 02:05:24.076352       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 02:05:24.177875       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1209 02:05:24.181087       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:05:24.186865       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:05:24.186913       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:05:24.188170       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1209 02:05:24.189648       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 02:05:24.287594       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:05:48.472329       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:05:48.472480       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1209 02:05:48.472497       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1209 02:05:48.472523       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1209 02:05:48.472572       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1209 02:05:48.472589       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 09 02:15:56 functional-545294 kubelet[6253]: E1209 02:15:56.919741    6253 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Dec 09 02:15:56 functional-545294 kubelet[6253]: E1209 02:15:56.919827    6253 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Dec 09 02:15:56 functional-545294 kubelet[6253]: E1209 02:15:56.920056    6253 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-bjmjc_default(400c5698-3c29-4604-a2e3-7446ec94865f): ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 09 02:15:56 functional-545294 kubelet[6253]: E1209 02:15:56.920118    6253 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-bjmjc" podUID="400c5698-3c29-4604-a2e3-7446ec94865f"
	Dec 09 02:16:04 functional-545294 kubelet[6253]: E1209 02:16:04.898255    6253 manager.go:1116] Failed to create existing container: /kubepods/burstable/podff84ec456446fa53da87afee0f115e3f/crio-c758dc0b99425c52831bdd825fbfbebf76a7560fb6bd8235f50f606ef4dd19d6: Error finding container c758dc0b99425c52831bdd825fbfbebf76a7560fb6bd8235f50f606ef4dd19d6: Status 404 returned error can't find the container with id c758dc0b99425c52831bdd825fbfbebf76a7560fb6bd8235f50f606ef4dd19d6
	Dec 09 02:16:04 functional-545294 kubelet[6253]: E1209 02:16:04.899504    6253 manager.go:1116] Failed to create existing container: /kubepods/burstable/poda5986582f384e8b76964852dba738451/crio-72521b6a719a676feec078bcc76ce9308b46e162a4af8f254a06efb00c8f8e55: Error finding container 72521b6a719a676feec078bcc76ce9308b46e162a4af8f254a06efb00c8f8e55: Status 404 returned error can't find the container with id 72521b6a719a676feec078bcc76ce9308b46e162a4af8f254a06efb00c8f8e55
	Dec 09 02:16:04 functional-545294 kubelet[6253]: E1209 02:16:04.900167    6253 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod7da85099-845e-43c0-abe3-694b2e59c644/crio-7471f0469bc616aead498ee3784d1390a01a97599c3d5a5318444e7deaa41c20: Error finding container 7471f0469bc616aead498ee3784d1390a01a97599c3d5a5318444e7deaa41c20: Status 404 returned error can't find the container with id 7471f0469bc616aead498ee3784d1390a01a97599c3d5a5318444e7deaa41c20
	Dec 09 02:16:04 functional-545294 kubelet[6253]: E1209 02:16:04.900746    6253 manager.go:1116] Failed to create existing container: /kubepods/burstable/poddfde71ed46fcf3e1413b4854831f8db9/crio-c48aca2827221a2e8fa145591415bf88bceb762f5b087eed3f0844db1fdbb470: Error finding container c48aca2827221a2e8fa145591415bf88bceb762f5b087eed3f0844db1fdbb470: Status 404 returned error can't find the container with id c48aca2827221a2e8fa145591415bf88bceb762f5b087eed3f0844db1fdbb470
	Dec 09 02:16:04 functional-545294 kubelet[6253]: E1209 02:16:04.901171    6253 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod17f41900-ed85-412a-b753-83ab0612a0d0/crio-44f15f5505463fe9d6adebc9fdd36d278e59ed90cd5ac660218c41fafdef026d: Error finding container 44f15f5505463fe9d6adebc9fdd36d278e59ed90cd5ac660218c41fafdef026d: Status 404 returned error can't find the container with id 44f15f5505463fe9d6adebc9fdd36d278e59ed90cd5ac660218c41fafdef026d
	Dec 09 02:16:04 functional-545294 kubelet[6253]: E1209 02:16:04.901361    6253 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod66ccb903-d2f9-4e8c-b9fe-25384e736e56/crio-b57d9da4d6ff79239594fec3344c2d956d8f4be7ce4466409183d6f31fe170ab: Error finding container b57d9da4d6ff79239594fec3344c2d956d8f4be7ce4466409183d6f31fe170ab: Status 404 returned error can't find the container with id b57d9da4d6ff79239594fec3344c2d956d8f4be7ce4466409183d6f31fe170ab
	Dec 09 02:16:05 functional-545294 kubelet[6253]: E1209 02:16:05.089214    6253 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765246565088805936 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:240114} inodes_used:{value:104}}"
	Dec 09 02:16:05 functional-545294 kubelet[6253]: E1209 02:16:05.089278    6253 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765246565088805936 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:240114} inodes_used:{value:104}}"
	Dec 09 02:16:05 functional-545294 kubelet[6253]: E1209 02:16:05.794425    6253 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-ztccb" podUID="81ac73b0-bd30-4571-9cf7-60cac343d17f"
	Dec 09 02:16:11 functional-545294 kubelet[6253]: E1209 02:16:11.795420    6253 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-bjmjc" podUID="400c5698-3c29-4604-a2e3-7446ec94865f"
	Dec 09 02:16:15 functional-545294 kubelet[6253]: E1209 02:16:15.091780    6253 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765246575091404623 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:240114} inodes_used:{value:104}}"
	Dec 09 02:16:15 functional-545294 kubelet[6253]: E1209 02:16:15.092179    6253 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765246575091404623 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:240114} inodes_used:{value:104}}"
	Dec 09 02:16:17 functional-545294 kubelet[6253]: E1209 02:16:17.795595    6253 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-ztccb" podUID="81ac73b0-bd30-4571-9cf7-60cac343d17f"
	Dec 09 02:16:22 functional-545294 kubelet[6253]: E1209 02:16:22.796934    6253 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-bjmjc" podUID="400c5698-3c29-4604-a2e3-7446ec94865f"
	Dec 09 02:16:25 functional-545294 kubelet[6253]: E1209 02:16:25.094438    6253 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765246585094076288 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:240114} inodes_used:{value:104}}"
	Dec 09 02:16:25 functional-545294 kubelet[6253]: E1209 02:16:25.094666    6253 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765246585094076288 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:240114} inodes_used:{value:104}}"
	Dec 09 02:16:27 functional-545294 kubelet[6253]: E1209 02:16:27.026185    6253 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 09 02:16:27 functional-545294 kubelet[6253]: E1209 02:16:27.026236    6253 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 09 02:16:27 functional-545294 kubelet[6253]: E1209 02:16:27.026528    6253 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-8dzft_kubernetes-dashboard(dbd91bdc-46aa-41fc-a596-f6f221db50ff): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 09 02:16:27 functional-545294 kubelet[6253]: E1209 02:16:27.026566    6253 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8dzft" podUID="dbd91bdc-46aa-41fc-a596-f6f221db50ff"
	Dec 09 02:16:28 functional-545294 kubelet[6253]: E1209 02:16:28.796813    6253 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-ztccb" podUID="81ac73b0-bd30-4571-9cf7-60cac343d17f"
	
	
	==> storage-provisioner [9a0eb6016c3c8701b48b1c0daf6bd0ec2d9b246d60f5a2d55046e6c67d5e54cd] <==
	W1209 02:16:07.029496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:16:09.033561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:16:09.041196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:16:11.044354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:16:11.050244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:16:13.054247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:16:13.063115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:16:15.067703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:16:15.076938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:16:17.081354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:16:17.088902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:16:19.092491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:16:19.098302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:16:21.102423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:16:21.112281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:16:23.116665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:16:23.127773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:16:25.131833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:16:25.138793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:16:27.143643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:16:27.150065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:16:29.153623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:16:29.163524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:16:31.169974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:16:31.177793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b94476afa73e7f5b21730d0829202d1d6524a0e811201d33665db75577741924] <==
	I1209 02:05:26.093774       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 02:05:26.149266       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 02:05:26.149542       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1209 02:05:26.185123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:05:29.642726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:05:33.903543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:05:37.503047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:05:40.556654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:05:43.581531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:05:43.595885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1209 02:05:43.596320       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 02:05:43.596559       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-545294_5ba57f46-08dc-4cc2-9a36-72607c17a622!
	I1209 02:05:43.597686       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e6c0a043-6a90-4a97-a10c-da4de6c83992", APIVersion:"v1", ResourceVersion:"508", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-545294_5ba57f46-08dc-4cc2-9a36-72607c17a622 became leader
	W1209 02:05:43.609289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:05:43.626360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1209 02:05:43.697697       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-545294_5ba57f46-08dc-4cc2-9a36-72607c17a622!
	W1209 02:05:45.630599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:05:45.644234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:05:47.648598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:05:47.654873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
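The kubelet entries and the pod descriptions in the dump above repeatedly fail with `toomanyrequests: You have reached your unauthenticated pull rate limit` from docker.io, which is what keeps hello-node-75c85bcc94-bjmjc, hello-node-connect-7d85dfc575-ztccb and the kubernetes-dashboard pods stuck in ImagePullBackOff; the etcd "apply request took too long" warnings suggest slow disk I/O on the test VM rather than a separate failure. A minimal workaround sketch, assuming the functional-545294 profile and the image tags shown in the events (kicbase/echo-server:latest, kubernetesui/dashboard:v2.7.0), is to side-load the images so the kubelet never has to pull them from Docker Hub during the test:

	# pull on the host (ideally with authenticated Docker credentials), then load into the node
	docker pull kicbase/echo-server:latest
	out/minikube-linux-amd64 -p functional-545294 image load kicbase/echo-server:latest
	# or pull inside the node via the CRI-O runtime (still rate-limited unless registry credentials are configured)
	out/minikube-linux-amd64 -p functional-545294 ssh -- sudo crictl pull docker.io/kubernetesui/dashboard:v2.7.0

These commands are illustrative of the mitigation, not part of the captured run.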
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-545294 -n functional-545294
helpers_test.go:269: (dbg) Run:  kubectl --context functional-545294 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-bjmjc hello-node-connect-7d85dfc575-ztccb dashboard-metrics-scraper-77bf4d6c4c-rmsbb kubernetes-dashboard-855c9754f9-8dzft
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-545294 describe pod busybox-mount hello-node-75c85bcc94-bjmjc hello-node-connect-7d85dfc575-ztccb dashboard-metrics-scraper-77bf4d6c4c-rmsbb kubernetes-dashboard-855c9754f9-8dzft
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-545294 describe pod busybox-mount hello-node-75c85bcc94-bjmjc hello-node-connect-7d85dfc575-ztccb dashboard-metrics-scraper-77bf4d6c4c-rmsbb kubernetes-dashboard-855c9754f9-8dzft: exit status 1 (99.27147ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-545294/192.168.39.184
	Start Time:       Tue, 09 Dec 2025 02:06:37 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://19ffd46bc92d655ae18972da483ae0b30262242aca0b5bbbf3d3e198e2b48fbc
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 09 Dec 2025 02:07:33 +0000
	      Finished:     Tue, 09 Dec 2025 02:07:33 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zvlb9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zvlb9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m56s  default-scheduler  Successfully assigned default/busybox-mount to functional-545294
	  Normal  Pulling    9m56s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m     kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.243s (55.843s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m     kubelet            Created container: mount-munger
	  Normal  Started    9m     kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-bjmjc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-545294/192.168.39.184
	Start Time:       Tue, 09 Dec 2025 02:06:30 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8b2td (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8b2td:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-bjmjc to functional-545294
	  Warning  Failed     2m57s (x3 over 9m2s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    97s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     37s (x5 over 9m2s)    kubelet            Error: ErrImagePull
	  Warning  Failed     37s (x2 over 7m42s)   kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    11s (x12 over 9m2s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     11s (x12 over 9m2s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-ztccb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-545294/192.168.39.184
	Start Time:       Tue, 09 Dec 2025 02:06:30 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fwkqd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-fwkqd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ztccb to functional-545294
	  Warning  Failed     9m32s                 kubelet            Failed to pull image "kicbase/echo-server": copying system image from manifest list: determining manifest MIME type for docker://kicbase/echo-server:latest: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     5m58s                 kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    119s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     81s (x5 over 9m32s)   kubelet            Error: ErrImagePull
	  Warning  Failed     81s (x3 over 8m27s)   kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     16s (x16 over 9m32s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    5s (x17 over 9m32s)   kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-rmsbb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-8dzft" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-545294 describe pod busybox-mount hello-node-75c85bcc94-bjmjc hello-node-connect-7d85dfc575-ztccb dashboard-metrics-scraper-77bf4d6c4c-rmsbb kubernetes-dashboard-855c9754f9-8dzft: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.14s)
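
The ServiceCmdConnect failure above is an image-pull problem rather than a service bug: both echo-server pods stay Pending because every pull of kicbase/echo-server hits Docker Hub's unauthenticated rate limit (toomanyrequests). The block below is a minimal client-go sketch of surfacing that state programmatically; it is not part of the test suite, and it assumes a kubeconfig at the default path plus a go.mod that resolves k8s.io/client-go.

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the kubeconfig's current context points at the profile under test
	// (functional-545294 in the log above).
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// List the pods the test waits on and report any image-pull problem,
	// which is what kept the echo-server containers waiting in the run above.
	pods, err := cs.CoreV1().Pods("default").List(context.Background(),
		metav1.ListOptions{LabelSelector: "app=hello-node"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			if w := st.State.Waiting; w != nil && strings.Contains(w.Reason, "ImagePull") {
				fmt.Printf("%s/%s: %s: %s\n", p.Name, st.Name, w.Reason, w.Message)
			}
		}
	}
}

Run against the cluster above, a sketch like this would print the ImagePullBackOff/ErrImagePull reason and message for each stuck container, i.e. the same information kubectl describe shows in the pod events.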

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-545294 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-545294 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-bjmjc" [400c5698-3c29-4604-a2e3-7446ec94865f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-545294 -n functional-545294
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-09 02:16:30.923528122 +0000 UTC m=+1254.517148203
functional_test.go:1460: (dbg) Run:  kubectl --context functional-545294 describe po hello-node-75c85bcc94-bjmjc -n default
functional_test.go:1460: (dbg) kubectl --context functional-545294 describe po hello-node-75c85bcc94-bjmjc -n default:
Name:             hello-node-75c85bcc94-bjmjc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-545294/192.168.39.184
Start Time:       Tue, 09 Dec 2025 02:06:30 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8b2td (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-8b2td:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-bjmjc to functional-545294
Warning  Failed     2m55s (x3 over 9m)   kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    95s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     35s (x5 over 9m)     kubelet            Error: ErrImagePull
Warning  Failed     35s (x2 over 7m40s)  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    9s (x12 over 9m)     kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     9s (x12 over 9m)     kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-545294 logs hello-node-75c85bcc94-bjmjc -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-545294 logs hello-node-75c85bcc94-bjmjc -n default: exit status 1 (74.522573ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-bjmjc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-545294 logs hello-node-75c85bcc94-bjmjc -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.69s)
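
functional_test.go:1460 spends the whole 10m0s window waiting for pods matching "app=hello-node" to become Ready and then fails with context deadline exceeded. A rough standard-library sketch of that kind of bounded wait, driving kubectl through os/exec, is shown below; waitForReadyPods is an illustrative name, not the helper the suite actually uses.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForReadyPods polls kubectl until every pod matching the selector reports
// Ready=True, or the context deadline expires -- loosely the shape of the wait
// that ends in "context deadline exceeded" above.
func waitForReadyPods(ctx context.Context, kubectlContext, selector string) error {
	tick := time.NewTicker(5 * time.Second)
	defer tick.Stop()
	for {
		out, err := exec.CommandContext(ctx, "kubectl", "--context", kubectlContext,
			"get", "pods", "-l", selector,
			"-o", `jsonpath={range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}`).Output()
		if err == nil {
			statuses := strings.Fields(string(out))
			ready := len(statuses) > 0
			for _, s := range statuses {
				if s != "True" {
					ready = false
				}
			}
			if ready {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pods %q not ready: %w", selector, ctx.Err())
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()
	if err := waitForReadyPods(ctx, "functional-545294", "app=hello-node"); err != nil {
		fmt.Println(err)
	}
}

With the pull stuck in back-off, the Ready condition never flips to True, so any loop of this shape can only run out its deadline.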

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-545294 service --namespace=default --https --url hello-node: exit status 115 (269.956027ms)

                                                
                                                
-- stdout --
	https://192.168.39.184:30821
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-545294 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-545294 service hello-node --url --format={{.IP}}: exit status 115 (281.890652ms)

                                                
                                                
-- stdout --
	192.168.39.184
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-545294 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-545294 service hello-node --url: exit status 115 (283.437815ms)

                                                
                                                
-- stdout --
	http://192.168.39.184:30821
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-545294 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.184:30821
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.28s)
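
ServiceCmd/HTTPS, ServiceCmd/Format and ServiceCmd/URL all manage to compute a NodePort URL for hello-node (https/http on 192.168.39.184:30821) yet exit with status 115, because minikube's SVC_UNREACHABLE check finds no running pod behind the service. The sketch below checks that same precondition by hand; hasReadyEndpoints is an illustrative helper (it assumes kubectl on PATH), not minikube's implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasReadyEndpoints reports whether a Service has at least one ready endpoint
// address -- roughly the condition behind the SVC_UNREACHABLE exits above.
func hasReadyEndpoints(kubectlContext, namespace, service string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", kubectlContext, "-n", namespace,
		"get", "endpoints", service,
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) != "", nil
}

func main() {
	ok, err := hasReadyEndpoints("functional-545294", "default", "hello-node")
	// While hello-node-75c85bcc94-bjmjc sits in ImagePullBackOff this prints "false <nil>",
	// which is why all three subtests bail out before ever curling the URL.
	fmt.Println(ok, err)
}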

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (4.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-074400 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-074400 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-074400 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-074400 --alsologtostderr -v=1] stderr:
I1209 02:21:11.945292  270861 out.go:360] Setting OutFile to fd 1 ...
I1209 02:21:11.945592  270861 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:21:11.945607  270861 out.go:374] Setting ErrFile to fd 2...
I1209 02:21:11.945612  270861 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:21:11.945810  270861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
I1209 02:21:11.946140  270861 mustload.go:66] Loading cluster: functional-074400
I1209 02:21:11.946496  270861 config.go:182] Loaded profile config "functional-074400": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 02:21:11.948592  270861 host.go:66] Checking if "functional-074400" exists ...
I1209 02:21:11.948810  270861 api_server.go:166] Checking apiserver status ...
I1209 02:21:11.948874  270861 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1209 02:21:11.952057  270861 main.go:143] libmachine: domain functional-074400 has defined MAC address 52:54:00:65:dd:79 in network mk-functional-074400
I1209 02:21:11.952749  270861 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:65:dd:79", ip: ""} in network mk-functional-074400: {Iface:virbr1 ExpiryTime:2025-12-09 03:16:54 +0000 UTC Type:0 Mac:52:54:00:65:dd:79 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:functional-074400 Clientid:01:52:54:00:65:dd:79}
I1209 02:21:11.952788  270861 main.go:143] libmachine: domain functional-074400 has defined IP address 192.168.39.13 and MAC address 52:54:00:65:dd:79 in network mk-functional-074400
I1209 02:21:11.953034  270861 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/functional-074400/id_rsa Username:docker}
I1209 02:21:12.076897  270861 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6876/cgroup
W1209 02:21:12.098079  270861 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/6876/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1209 02:21:12.098137  270861 ssh_runner.go:195] Run: ls
I1209 02:21:12.103897  270861 api_server.go:253] Checking apiserver healthz at https://192.168.39.13:8441/healthz ...
I1209 02:21:12.110945  270861 api_server.go:279] https://192.168.39.13:8441/healthz returned 200:
ok
W1209 02:21:12.111019  270861 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1209 02:21:12.111259  270861 config.go:182] Loaded profile config "functional-074400": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 02:21:12.111283  270861 addons.go:70] Setting dashboard=true in profile "functional-074400"
I1209 02:21:12.111291  270861 addons.go:239] Setting addon dashboard=true in "functional-074400"
I1209 02:21:12.111318  270861 host.go:66] Checking if "functional-074400" exists ...
I1209 02:21:12.115439  270861 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1209 02:21:12.116991  270861 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1209 02:21:12.118995  270861 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1209 02:21:12.119017  270861 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1209 02:21:12.122221  270861 main.go:143] libmachine: domain functional-074400 has defined MAC address 52:54:00:65:dd:79 in network mk-functional-074400
I1209 02:21:12.122680  270861 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:65:dd:79", ip: ""} in network mk-functional-074400: {Iface:virbr1 ExpiryTime:2025-12-09 03:16:54 +0000 UTC Type:0 Mac:52:54:00:65:dd:79 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:functional-074400 Clientid:01:52:54:00:65:dd:79}
I1209 02:21:12.122716  270861 main.go:143] libmachine: domain functional-074400 has defined IP address 192.168.39.13 and MAC address 52:54:00:65:dd:79 in network mk-functional-074400
I1209 02:21:12.122934  270861 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/functional-074400/id_rsa Username:docker}
I1209 02:21:12.251139  270861 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1209 02:21:12.251171  270861 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1209 02:21:12.298363  270861 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1209 02:21:12.298396  270861 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1209 02:21:12.334142  270861 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1209 02:21:12.334172  270861 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1209 02:21:12.364805  270861 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1209 02:21:12.364848  270861 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1209 02:21:12.402639  270861 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1209 02:21:12.402677  270861 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1209 02:21:12.429280  270861 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1209 02:21:12.429307  270861 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1209 02:21:12.456385  270861 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1209 02:21:12.456413  270861 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1209 02:21:12.483271  270861 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1209 02:21:12.483298  270861 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1209 02:21:12.514060  270861 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1209 02:21:12.514099  270861 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1209 02:21:12.541017  270861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1209 02:21:13.409672  270861 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-074400 addons enable metrics-server

                                                
                                                
I1209 02:21:13.411161  270861 addons.go:202] Writing out "functional-074400" config to set dashboard=true...
W1209 02:21:13.411466  270861 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1209 02:21:13.412187  270861 kapi.go:59] client config for functional-074400: &rest.Config{Host:"https://192.168.39.13:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.key", CAFile:"/home/jenkins/minikube-integration/22081-254936/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28162e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1209 02:21:13.412663  270861 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1209 02:21:13.412701  270861 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1209 02:21:13.412710  270861 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1209 02:21:13.412719  270861 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1209 02:21:13.412726  270861 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1209 02:21:13.425933  270861 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  4daf7dd7-8884-49eb-af34-420aa286a480 928 0 2025-12-09 02:21:13 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-09 02:21:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.106.92.119,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.106.92.119],IPFamilies:[IPv4],AllocateLoadBalance
rNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1209 02:21:13.426131  270861 out.go:285] * Launching proxy ...
* Launching proxy ...
I1209 02:21:13.426209  270861 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-074400 proxy --port 36195]
I1209 02:21:13.426655  270861 dashboard.go:159] Waiting for kubectl to output host:port ...
I1209 02:21:13.480170  270861 out.go:203] 
W1209 02:21:13.481844  270861 out.go:285] X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
W1209 02:21:13.481866  270861 out.go:285] * 
* 
W1209 02:21:13.488702  270861 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1209 02:21:13.490778  270861 out.go:203] 
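
The dashboard addon itself applies cleanly above; the test dies one step later, when the kubectl proxy child (port 36195) exits before printing its listen address and minikube's wait for host:port sees EOF (HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF). The sketch below is a rough reconstruction of that handshake, not minikube's dashboard.go; it assumes kubectl on PATH and uses illustrative names.

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// startProxy launches `kubectl proxy` and waits for its "Starting to serve on
// HOST:PORT" banner on stdout -- the step that returned EOF in the run above
// because the proxy exited before printing anything.
func startProxy(kubectlContext string, port int) (string, error) {
	cmd := exec.Command("kubectl", "--context", kubectlContext, "proxy",
		fmt.Sprintf("--port=%d", port))
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return "", err
	}
	if err := cmd.Start(); err != nil {
		return "", err
	}
	lineCh := make(chan string, 1)
	errCh := make(chan error, 1)
	go func() {
		line, err := bufio.NewReader(stdout).ReadString('\n')
		if err != nil {
			errCh <- err
			return
		}
		lineCh <- line
	}()
	select {
	case line := <-lineCh:
		// A healthy proxy prints e.g. "Starting to serve on 127.0.0.1:36195".
		return strings.TrimSpace(strings.TrimPrefix(line, "Starting to serve on ")), nil
	case err := <-errCh:
		return "", fmt.Errorf("kubectl proxy exited before printing host:port: %w", err)
	case <-time.After(30 * time.Second):
		_ = cmd.Process.Kill()
		return "", fmt.Errorf("timed out waiting for kubectl proxy output")
	}
}

func main() {
	// A real caller would keep cmd around and terminate the proxy when done.
	hostPort, err := startProxy("functional-074400", 36195)
	fmt.Println(hostPort, err)
}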
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-074400 -n functional-074400
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-074400 logs -n 25: (1.676997696s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image     │ functional-074400 image save kicbase/echo-server:functional-074400 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:19 UTC │ 09 Dec 25 02:19 UTC │
	│ image     │ functional-074400 image rm kicbase/echo-server:functional-074400 --alsologtostderr                                                                           │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:19 UTC │ 09 Dec 25 02:19 UTC │
	│ image     │ functional-074400 image ls                                                                                                                                   │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:19 UTC │ 09 Dec 25 02:19 UTC │
	│ image     │ functional-074400 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:19 UTC │ 09 Dec 25 02:19 UTC │
	│ image     │ functional-074400 image ls                                                                                                                                   │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:19 UTC │ 09 Dec 25 02:19 UTC │
	│ image     │ functional-074400 image save --daemon kicbase/echo-server:functional-074400 --alsologtostderr                                                                │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:19 UTC │ 09 Dec 25 02:19 UTC │
	│ ssh       │ functional-074400 ssh stat /mount-9p/created-by-test                                                                                                         │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │ 09 Dec 25 02:20 UTC │
	│ ssh       │ functional-074400 ssh stat /mount-9p/created-by-pod                                                                                                          │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │ 09 Dec 25 02:20 UTC │
	│ ssh       │ functional-074400 ssh sudo umount -f /mount-9p                                                                                                               │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │ 09 Dec 25 02:20 UTC │
	│ mount     │ -p functional-074400 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo273279271/001:/mount-9p --alsologtostderr -v=1 --port 46464           │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │                     │
	│ ssh       │ functional-074400 ssh findmnt -T /mount-9p | grep 9p                                                                                                         │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │                     │
	│ ssh       │ functional-074400 ssh findmnt -T /mount-9p | grep 9p                                                                                                         │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │ 09 Dec 25 02:20 UTC │
	│ ssh       │ functional-074400 ssh -- ls -la /mount-9p                                                                                                                    │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │ 09 Dec 25 02:20 UTC │
	│ ssh       │ functional-074400 ssh sudo umount -f /mount-9p                                                                                                               │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │                     │
	│ mount     │ -p functional-074400 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2669497316/001:/mount2 --alsologtostderr -v=1                         │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │                     │
	│ mount     │ -p functional-074400 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2669497316/001:/mount1 --alsologtostderr -v=1                         │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │                     │
	│ mount     │ -p functional-074400 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2669497316/001:/mount3 --alsologtostderr -v=1                         │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │                     │
	│ ssh       │ functional-074400 ssh findmnt -T /mount1                                                                                                                     │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │                     │
	│ ssh       │ functional-074400 ssh findmnt -T /mount1                                                                                                                     │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │ 09 Dec 25 02:20 UTC │
	│ ssh       │ functional-074400 ssh findmnt -T /mount2                                                                                                                     │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │ 09 Dec 25 02:20 UTC │
	│ ssh       │ functional-074400 ssh findmnt -T /mount3                                                                                                                     │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │ 09 Dec 25 02:20 UTC │
	│ mount     │ -p functional-074400 --kill=true                                                                                                                             │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │                     │
	│ start     │ -p functional-074400 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                  │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:21 UTC │                     │
	│ start     │ -p functional-074400 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                            │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:21 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-074400 --alsologtostderr -v=1                                                                                               │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:21 UTC │                     │
	└───────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 02:21:11
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 02:21:11.668557  270845 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:21:11.668720  270845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:21:11.668728  270845 out.go:374] Setting ErrFile to fd 2...
	I1209 02:21:11.668735  270845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:21:11.669124  270845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
	I1209 02:21:11.669862  270845 out.go:368] Setting JSON to false
	I1209 02:21:11.671248  270845 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":29022,"bootTime":1765217850,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:21:11.671349  270845 start.go:143] virtualization: kvm guest
	I1209 02:21:11.674987  270845 out.go:179] * [functional-074400] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 02:21:11.677897  270845 notify.go:221] Checking for updates...
	I1209 02:21:11.678058  270845 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:21:11.685602  270845 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:21:11.687614  270845 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-254936/kubeconfig
	I1209 02:21:11.696389  270845 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-254936/.minikube
	I1209 02:21:11.707660  270845 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:21:11.712881  270845 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:21:11.723017  270845 config.go:182] Loaded profile config "functional-074400": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 02:21:11.723551  270845 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:21:11.824590  270845 out.go:179] * Using the kvm2 driver based on existing profile
	I1209 02:21:11.841576  270845 start.go:309] selected driver: kvm2
	I1209 02:21:11.841611  270845 start.go:927] validating driver "kvm2" against &{Name:functional-074400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-074400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:21:11.841758  270845 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:21:11.843215  270845 cni.go:84] Creating CNI manager for ""
	I1209 02:21:11.843311  270845 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 02:21:11.843361  270845 start.go:353] cluster config:
	{Name:functional-074400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074400 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:21:11.866464  270845 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 09 02:21:14 functional-074400 crio[5283]: time="2025-12-09 02:21:14.460408691Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765246874460378586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:215038,},InodesUsed:&UInt64Value{Value:88,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a8ca05ae-43de-455e-aa2f-4a0fcdc5893a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:21:14 functional-074400 crio[5283]: time="2025-12-09 02:21:14.461492289Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a698ec3-d11c-43dc-bf9a-060727e5d88f name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:21:14 functional-074400 crio[5283]: time="2025-12-09 02:21:14.461555208Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a698ec3-d11c-43dc-bf9a-060727e5d88f name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:21:14 functional-074400 crio[5283]: time="2025-12-09 02:21:14.461910827Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:93afd08a59ada97d1a31a37780c4f5983e4a5e3ddb9c08fa3f3d59d42259bbbb,PodSandboxId:2159112d0fa49467af9e1a91bf079d7c7c6002945edcc7b75abdec2da7af7814,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765246865079041234,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 431c06e4-5599-47d4-8f8e-fe047be3b9b9,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fef945be98899fa8b02dd19aff3b8c77eb0f0c6ce94c3a69e0652753f1ff55a3,PodSandboxId:e7d154816b9d1fa1ac2af1fd38e335654c107702110bc6d8ec71c7e2051b9b93,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765246863291902987,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-r489h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a98ebf9-c223-495b-9d6b-890b748749e8,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"
TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b89d37aeec53b9d6ee80b63c22598071f010cd29423aebcc64906de620467314,PodSandboxId:8bf7207eb36542c95d86af3a2d3d637df56a27722fc7e059e32f9a59816f343a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765246813722508450,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 315ec66e-345d-4c53-a2a6-50f943add31b,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14173d291caa4718370d24685ac59608d6db0097c2bb19631282e71389726769,PodSandboxId:c29d3b18a6a3610837f59130f5440c294d48219315b97d32918d55c95bc57db8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765246756144763673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kqmgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7e3ff87-a95f-47b2-8a2b-c259bae12d26,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.res
tartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96bf9fc86c3c3ab191e90888c1eca1ed3482b2143a2ff87819c0e3b7b5aa541c,PodSandboxId:36a619ae05fdaec1e59a961632b7b4c39e227dbeb5c5d7fb7ab3e266ef416151,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765246756145489157,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-jc7zv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43eea0bb-be89-4179-aa9e-6c2354730e6e,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:527eadb8601da5def17eaa2edb74967c026667ca94cc89d7396a212f7a334be8,PodSandboxId:c8ced35f3f07a378f22ed5f8f42d12c198079192e611afbbcdb40c044f19bfa7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:17652467535
79574512,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98e8c17203ca10918c9f38c7d0e332c8,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7e187bd9a3a6318fa2d523b92b9013a408f816ace5db8c1c222c7793427524f,PodSandboxId:823ae767de64dda5ab0fd523e009c04adda0bffb5e3d3923a0300aced80dd593,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7b
b6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1765246753375190471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c61e57199dc27febe166ded52cc142d7,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5720f563ee852c04aade1c1dcfe0527e5d0b9cafff5f5324ea01a48641c2b879,PodSandboxId:f3953026b79acab4142afb82c0f731f122d103e98e4442c14ce7bf8018d8d677,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b
2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765246753347742444,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40e1bacf59823142e176506131e39c07,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a74ea4fb7f9b159bbd43d1477cadc8ec4c8fb68dcc7a6dd47eff4f26721c65aa,PodSandboxId:a08cf4f29ba9239e238b67ae93da1ccda30c769bd690fffb
ffd5375cb9f1ea16,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765246753320140788,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71e4e0a7d57a210efdc84f9032753d,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cfeb83409be8ec96c0b53f9c541a123c60a81424ed708b3
16900cfc6bac7634,PodSandboxId:fbd0c8d71de5a5378582509596f03922f8e7355ba930bdb0dc355697943c8a7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765246750746936761,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a490d4a5-6b55-4d29-b267-b700cba89a87,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e07378628f06fd0471083dd2035277e9a204d73d71caa2149d2520334a5a
8780,PodSandboxId:36a619ae05fdaec1e59a961632b7b4c39e227dbeb5c5d7fb7ab3e266ef416151,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765246728146643003,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-jc7zv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43eea0bb-be89-4179-aa9e-6c2354730e6e,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readines
s-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93888ae1c2ed9e59841961ffbd087abe1b960432ec14bd6ede69fe08b06f6528,PodSandboxId:fbd0c8d71de5a5378582509596f03922f8e7355ba930bdb0dc355697943c8a7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765246727258029938,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a490d4a5-6b55-4d29-b267-b700cba89a87,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b12a876b1607656dcae43a6f788e50cffa4515f9e671385f84c9294e3f8ea253,PodSandboxId:a08cf4f29ba9239e238b67ae93da1ccda30c769bd690fffbffd5375cb9f1ea16,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765246727110935066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71e4e0a7d57a210efdc84f9032753d,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,i
o.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bacf40dd7a6ed51348aa2ab49f8b52e918b4dfd14fb9632d8e828165a44be415,PodSandboxId:823ae767de64dda5ab0fd523e009c04adda0bffb5e3d3923a0300aced80dd593,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765246726970198747,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: c61e57199dc27febe166ded52cc142d7,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4564cf1f6c21fbf6c388bc6c5703fdc330467e6a7b0e87256575ffbb8496510c,PodSandboxId:f3953026b79acab4142afb82c0f731f122d103e98e4442c14ce7bf8018d8d677,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765246726898352510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-
manager,io.kubernetes.pod.name: kube-controller-manager-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40e1bacf59823142e176506131e39c07,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:823c02139d722aeeb211b14581e8aa5f8644ac71dc817330ea18d811ea6d2be9,PodSandboxId:c29d3b18a6a3610837f59130f5440c294d48219315b97d32918d55c95bc57db8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,St
ate:CONTAINER_EXITED,CreatedAt:1765246726822665281,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kqmgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7e3ff87-a95f-47b2-8a2b-c259bae12d26,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a698ec3-d11c-43dc-bf9a-060727e5d88f name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:21:14 functional-074400 crio[5283]: time="2025-12-09 02:21:14.579942750Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=772c7588-9408-4160-8897-84ca9eb8d03d name=/runtime.v1.RuntimeService/Version
	Dec 09 02:21:14 functional-074400 crio[5283]: time="2025-12-09 02:21:14.580119276Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=772c7588-9408-4160-8897-84ca9eb8d03d name=/runtime.v1.RuntimeService/Version
	Dec 09 02:21:14 functional-074400 crio[5283]: time="2025-12-09 02:21:14.582137071Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9b8990ea-27a0-47a7-987c-9182963bb4e6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:21:14 functional-074400 crio[5283]: time="2025-12-09 02:21:14.582735429Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765246874582710036,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:215038,},InodesUsed:&UInt64Value{Value:88,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b8990ea-27a0-47a7-987c-9182963bb4e6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:21:14 functional-074400 crio[5283]: time="2025-12-09 02:21:14.584131171Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e0872415-2894-4804-8dd5-9bddff98e80c name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:21:14 functional-074400 crio[5283]: time="2025-12-09 02:21:14.584212000Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e0872415-2894-4804-8dd5-9bddff98e80c name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:21:14 functional-074400 crio[5283]: time="2025-12-09 02:21:14.584616743Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:93afd08a59ada97d1a31a37780c4f5983e4a5e3ddb9c08fa3f3d59d42259bbbb,PodSandboxId:2159112d0fa49467af9e1a91bf079d7c7c6002945edcc7b75abdec2da7af7814,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765246865079041234,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 431c06e4-5599-47d4-8f8e-fe047be3b9b9,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fef945be98899fa8b02dd19aff3b8c77eb0f0c6ce94c3a69e0652753f1ff55a3,PodSandboxId:e7d154816b9d1fa1ac2af1fd38e335654c107702110bc6d8ec71c7e2051b9b93,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765246863291902987,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-r489h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a98ebf9-c223-495b-9d6b-890b748749e8,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"
TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b89d37aeec53b9d6ee80b63c22598071f010cd29423aebcc64906de620467314,PodSandboxId:8bf7207eb36542c95d86af3a2d3d637df56a27722fc7e059e32f9a59816f343a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765246813722508450,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 315ec66e-345d-4c53-a2a6-50f943add31b,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14173d291caa4718370d24685ac59608d6db0097c2bb19631282e71389726769,PodSandboxId:c29d3b18a6a3610837f59130f5440c294d48219315b97d32918d55c95bc57db8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765246756144763673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kqmgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7e3ff87-a95f-47b2-8a2b-c259bae12d26,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.res
tartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96bf9fc86c3c3ab191e90888c1eca1ed3482b2143a2ff87819c0e3b7b5aa541c,PodSandboxId:36a619ae05fdaec1e59a961632b7b4c39e227dbeb5c5d7fb7ab3e266ef416151,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765246756145489157,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-jc7zv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43eea0bb-be89-4179-aa9e-6c2354730e6e,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:527eadb8601da5def17eaa2edb74967c026667ca94cc89d7396a212f7a334be8,PodSandboxId:c8ced35f3f07a378f22ed5f8f42d12c198079192e611afbbcdb40c044f19bfa7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:17652467535
79574512,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98e8c17203ca10918c9f38c7d0e332c8,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7e187bd9a3a6318fa2d523b92b9013a408f816ace5db8c1c222c7793427524f,PodSandboxId:823ae767de64dda5ab0fd523e009c04adda0bffb5e3d3923a0300aced80dd593,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7b
b6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1765246753375190471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c61e57199dc27febe166ded52cc142d7,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5720f563ee852c04aade1c1dcfe0527e5d0b9cafff5f5324ea01a48641c2b879,PodSandboxId:f3953026b79acab4142afb82c0f731f122d103e98e4442c14ce7bf8018d8d677,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b
2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765246753347742444,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40e1bacf59823142e176506131e39c07,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a74ea4fb7f9b159bbd43d1477cadc8ec4c8fb68dcc7a6dd47eff4f26721c65aa,PodSandboxId:a08cf4f29ba9239e238b67ae93da1ccda30c769bd690fffb
ffd5375cb9f1ea16,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765246753320140788,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71e4e0a7d57a210efdc84f9032753d,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cfeb83409be8ec96c0b53f9c541a123c60a81424ed708b3
16900cfc6bac7634,PodSandboxId:fbd0c8d71de5a5378582509596f03922f8e7355ba930bdb0dc355697943c8a7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765246750746936761,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a490d4a5-6b55-4d29-b267-b700cba89a87,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e07378628f06fd0471083dd2035277e9a204d73d71caa2149d2520334a5a
8780,PodSandboxId:36a619ae05fdaec1e59a961632b7b4c39e227dbeb5c5d7fb7ab3e266ef416151,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765246728146643003,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-jc7zv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43eea0bb-be89-4179-aa9e-6c2354730e6e,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readines
s-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93888ae1c2ed9e59841961ffbd087abe1b960432ec14bd6ede69fe08b06f6528,PodSandboxId:fbd0c8d71de5a5378582509596f03922f8e7355ba930bdb0dc355697943c8a7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765246727258029938,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a490d4a5-6b55-4d29-b267-b700cba89a87,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b12a876b1607656dcae43a6f788e50cffa4515f9e671385f84c9294e3f8ea253,PodSandboxId:a08cf4f29ba9239e238b67ae93da1ccda30c769bd690fffbffd5375cb9f1ea16,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765246727110935066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71e4e0a7d57a210efdc84f9032753d,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,i
o.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bacf40dd7a6ed51348aa2ab49f8b52e918b4dfd14fb9632d8e828165a44be415,PodSandboxId:823ae767de64dda5ab0fd523e009c04adda0bffb5e3d3923a0300aced80dd593,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765246726970198747,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: c61e57199dc27febe166ded52cc142d7,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4564cf1f6c21fbf6c388bc6c5703fdc330467e6a7b0e87256575ffbb8496510c,PodSandboxId:f3953026b79acab4142afb82c0f731f122d103e98e4442c14ce7bf8018d8d677,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765246726898352510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-
manager,io.kubernetes.pod.name: kube-controller-manager-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40e1bacf59823142e176506131e39c07,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:823c02139d722aeeb211b14581e8aa5f8644ac71dc817330ea18d811ea6d2be9,PodSandboxId:c29d3b18a6a3610837f59130f5440c294d48219315b97d32918d55c95bc57db8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,St
ate:CONTAINER_EXITED,CreatedAt:1765246726822665281,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kqmgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7e3ff87-a95f-47b2-8a2b-c259bae12d26,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e0872415-2894-4804-8dd5-9bddff98e80c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                         CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	93afd08a59ada       d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9                                              9 seconds ago        Running             myfrontend                0                   2159112d0fa49       sp-pod                                      default
	fef945be98899       public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036   11 seconds ago       Running             mysql                     0                   e7d154816b9d1       mysql-7d7b65bc95-r489h                      default
	b89d37aeec53b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e           About a minute ago   Exited              mount-munger              0                   8bf7207eb3654       busybox-mount                               default
	96bf9fc86c3c3       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                              About a minute ago   Running             coredns                   3                   36a619ae05fda       coredns-7d764666f9-jc7zv                    kube-system
	14173d291caa4       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                              About a minute ago   Running             kube-proxy                3                   c29d3b18a6a36       kube-proxy-kqmgp                            kube-system
	527eadb8601da       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                              2 minutes ago        Running             kube-apiserver            0                   c8ced35f3f07a       kube-apiserver-functional-074400            kube-system
	d7e187bd9a3a6       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                              2 minutes ago        Running             kube-scheduler            3                   823ae767de64d       kube-scheduler-functional-074400            kube-system
	5720f563ee852       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                              2 minutes ago        Running             kube-controller-manager   3                   f3953026b79ac       kube-controller-manager-functional-074400   kube-system
	a74ea4fb7f9b1       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                              2 minutes ago        Running             etcd                      3                   a08cf4f29ba92       etcd-functional-074400                      kube-system
	5cfeb83409be8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              2 minutes ago        Running             storage-provisioner       4                   fbd0c8d71de5a       storage-provisioner                         kube-system
	e07378628f06f       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                              2 minutes ago        Exited              coredns                   2                   36a619ae05fda       coredns-7d764666f9-jc7zv                    kube-system
	93888ae1c2ed9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              2 minutes ago        Exited              storage-provisioner       3                   fbd0c8d71de5a       storage-provisioner                         kube-system
	b12a876b16076       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                              2 minutes ago        Exited              etcd                      2                   a08cf4f29ba92       etcd-functional-074400                      kube-system
	bacf40dd7a6ed       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                              2 minutes ago        Exited              kube-scheduler            2                   823ae767de64d       kube-scheduler-functional-074400            kube-system
	4564cf1f6c21f       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                              2 minutes ago        Exited              kube-controller-manager   2                   f3953026b79ac       kube-controller-manager-functional-074400   kube-system
	823c02139d722       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                              2 minutes ago        Exited              kube-proxy                2                   c29d3b18a6a36       kube-proxy-kqmgp                            kube-system
	
	
	==> coredns [96bf9fc86c3c3ab191e90888c1eca1ed3482b2143a2ff87819c0e3b7b5aa541c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:60550 - 1226 "HINFO IN 720540791316241156.5827029367603844758. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.024036833s
	
	
	==> coredns [e07378628f06fd0471083dd2035277e9a204d73d71caa2149d2520334a5a8780] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:51973 - 8162 "HINFO IN 7492499431195076065.3095992251303409408. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.033376104s
	
	
	==> describe nodes <==
	Name:               functional-074400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-074400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=functional-074400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T02_17_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 02:17:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-074400
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 02:21:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 02:20:46 +0000   Tue, 09 Dec 2025 02:17:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 02:20:46 +0000   Tue, 09 Dec 2025 02:17:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 02:20:46 +0000   Tue, 09 Dec 2025 02:17:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 02:20:46 +0000   Tue, 09 Dec 2025 02:17:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.13
	  Hostname:    functional-074400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 db00c3ccb5e94810b925118d8c6c365e
	  System UUID:                db00c3cc-b5e9-4810-b925-118d8c6c365e
	  Boot ID:                    dda03a86-a51a-4ba3-a580-4d8b50831a16
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-qkj2j                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  default                     hello-node-connect-9f67c86d4-zbhnt            0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  default                     mysql-7d7b65bc95-r489h                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    55s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7d764666f9-jc7zv                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m56s
	  kube-system                 etcd-functional-074400                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m
	  kube-system                 kube-apiserver-functional-074400              250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-functional-074400     200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-proxy-kqmgp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-scheduler-functional-074400              100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-2cgfd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-j58f5          0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                Age    From             Message
	  ----    ------                ----   ----             -------
	  Normal  RegisteredNode        3m57s  node-controller  Node functional-074400 event: Registered Node functional-074400 in Controller
	  Normal  CIDRAssignmentFailed  3m57s  cidrAllocator    Node functional-074400 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode        3m2s   node-controller  Node functional-074400 event: Registered Node functional-074400 in Controller
	  Normal  RegisteredNode        2m21s  node-controller  Node functional-074400 event: Registered Node functional-074400 in Controller
	  Normal  RegisteredNode        116s   node-controller  Node functional-074400 event: Registered Node functional-074400 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.084620] kauditd_printk_skb: 1 callbacks suppressed
	[Dec 9 02:17] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.141349] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.000029] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.639682] kauditd_printk_skb: 252 callbacks suppressed
	[ +25.087039] kauditd_printk_skb: 38 callbacks suppressed
	[Dec 9 02:18] kauditd_printk_skb: 11 callbacks suppressed
	[  +1.077221] kauditd_printk_skb: 183 callbacks suppressed
	[  +0.969669] kauditd_printk_skb: 185 callbacks suppressed
	[  +0.117305] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.675621] kauditd_printk_skb: 78 callbacks suppressed
	[  +6.655370] kauditd_printk_skb: 253 callbacks suppressed
	[Dec 9 02:19] kauditd_printk_skb: 19 callbacks suppressed
	[  +0.342083] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.131658] kauditd_printk_skb: 8 callbacks suppressed
	[  +4.871040] kauditd_printk_skb: 129 callbacks suppressed
	[  +0.098874] kauditd_printk_skb: 97 callbacks suppressed
	[  +4.134598] kauditd_printk_skb: 74 callbacks suppressed
	[Dec 9 02:20] kauditd_printk_skb: 74 callbacks suppressed
	[  +3.625583] kauditd_printk_skb: 31 callbacks suppressed
	[ +25.077425] kauditd_printk_skb: 38 callbacks suppressed
	[  +8.783835] kauditd_printk_skb: 11 callbacks suppressed
	[Dec 9 02:21] kauditd_printk_skb: 8 callbacks suppressed
	[  +8.594909] kauditd_printk_skb: 64 callbacks suppressed
	
	
	==> etcd [a74ea4fb7f9b159bbd43d1477cadc8ec4c8fb68dcc7a6dd47eff4f26721c65aa] <==
	{"level":"info","ts":"2025-12-09T02:20:58.203460Z","caller":"traceutil/trace.go:172","msg":"trace[1003427352] transaction","detail":"{read_only:false; response_revision:846; number_of_response:1; }","duration":"331.917214ms","start":"2025-12-09T02:20:57.871532Z","end":"2025-12-09T02:20:58.203449Z","steps":["trace[1003427352] 'process raft request'  (duration: 331.622875ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T02:20:58.206165Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"140.663907ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T02:20:58.207909Z","caller":"traceutil/trace.go:172","msg":"trace[2066851030] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:846; }","duration":"143.885219ms","start":"2025-12-09T02:20:58.064012Z","end":"2025-12-09T02:20:58.207897Z","steps":["trace[2066851030] 'agreement among raft nodes before linearized reading'  (duration: 140.637695ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T02:20:58.207606Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-09T02:20:57.871508Z","time spent":"335.887273ms","remote":"127.0.0.1:37214","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":682,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-svgerveyegzzmwaexgf6tstxpq\" mod_revision:829 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-svgerveyegzzmwaexgf6tstxpq\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-svgerveyegzzmwaexgf6tstxpq\" > >"}
	{"level":"warn","ts":"2025-12-09T02:20:58.206529Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.832277ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T02:20:58.208922Z","caller":"traceutil/trace.go:172","msg":"trace[526939693] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:846; }","duration":"128.240337ms","start":"2025-12-09T02:20:58.080671Z","end":"2025-12-09T02:20:58.208912Z","steps":["trace[526939693] 'agreement among raft nodes before linearized reading'  (duration: 125.803196ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T02:21:01.537930Z","caller":"traceutil/trace.go:172","msg":"trace[1838618841] linearizableReadLoop","detail":"{readStateIndex:949; appliedIndex:949; }","duration":"474.214732ms","start":"2025-12-09T02:21:01.063699Z","end":"2025-12-09T02:21:01.537914Z","steps":["trace[1838618841] 'read index received'  (duration: 474.208744ms)","trace[1838618841] 'applied index is now lower than readState.Index'  (duration: 5.143µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-09T02:21:01.538134Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"474.487407ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T02:21:01.538156Z","caller":"traceutil/trace.go:172","msg":"trace[1753031098] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:847; }","duration":"474.522854ms","start":"2025-12-09T02:21:01.063627Z","end":"2025-12-09T02:21:01.538150Z","steps":["trace[1753031098] 'agreement among raft nodes before linearized reading'  (duration: 474.401975ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T02:21:01.538205Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-09T02:21:01.063605Z","time spent":"474.592817ms","remote":"127.0.0.1:37060","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-12-09T02:21:01.538287Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"456.892171ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T02:21:01.538322Z","caller":"traceutil/trace.go:172","msg":"trace[404851745] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:848; }","duration":"456.933995ms","start":"2025-12-09T02:21:01.081381Z","end":"2025-12-09T02:21:01.538315Z","steps":["trace[404851745] 'agreement among raft nodes before linearized reading'  (duration: 456.877994ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T02:21:01.538341Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-09T02:21:01.081362Z","time spent":"456.975262ms","remote":"127.0.0.1:37060","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-12-09T02:21:01.538428Z","caller":"traceutil/trace.go:172","msg":"trace[1888258504] transaction","detail":"{read_only:false; response_revision:848; number_of_response:1; }","duration":"600.591481ms","start":"2025-12-09T02:21:00.937830Z","end":"2025-12-09T02:21:01.538422Z","steps":["trace[1888258504] 'process raft request'  (duration: 600.281604ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T02:21:01.538486Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-09T02:21:00.937807Z","time spent":"600.639337ms","remote":"127.0.0.1:37012","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:847 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-12-09T02:21:01.538491Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"354.244835ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T02:21:01.538506Z","caller":"traceutil/trace.go:172","msg":"trace[1043831555] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:848; }","duration":"354.260366ms","start":"2025-12-09T02:21:01.184241Z","end":"2025-12-09T02:21:01.538502Z","steps":["trace[1043831555] 'agreement among raft nodes before linearized reading'  (duration: 354.236536ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T02:21:01.538946Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"388.934728ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T02:21:01.538996Z","caller":"traceutil/trace.go:172","msg":"trace[6101682] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:848; }","duration":"389.142667ms","start":"2025-12-09T02:21:01.149848Z","end":"2025-12-09T02:21:01.538991Z","steps":["trace[6101682] 'agreement among raft nodes before linearized reading'  (duration: 388.921164ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T02:21:01.539025Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-09T02:21:01.149736Z","time spent":"389.275312ms","remote":"127.0.0.1:36722","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2025-12-09T02:21:11.978710Z","caller":"traceutil/trace.go:172","msg":"trace[1856364477] linearizableReadLoop","detail":"{readStateIndex:979; appliedIndex:979; }","duration":"264.374425ms","start":"2025-12-09T02:21:11.714317Z","end":"2025-12-09T02:21:11.978691Z","steps":["trace[1856364477] 'read index received'  (duration: 264.369541ms)","trace[1856364477] 'applied index is now lower than readState.Index'  (duration: 4.202µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-09T02:21:11.979005Z","caller":"traceutil/trace.go:172","msg":"trace[1327415031] transaction","detail":"{read_only:false; response_revision:875; number_of_response:1; }","duration":"356.708174ms","start":"2025-12-09T02:21:11.622282Z","end":"2025-12-09T02:21:11.978990Z","steps":["trace[1327415031] 'process raft request'  (duration: 356.502466ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T02:21:11.979254Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-09T02:21:11.622256Z","time spent":"356.919969ms","remote":"127.0.0.1:37012","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:874 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-12-09T02:21:11.980256Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"264.978502ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T02:21:11.980786Z","caller":"traceutil/trace.go:172","msg":"trace[2097505626] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:875; }","duration":"266.222384ms","start":"2025-12-09T02:21:11.714313Z","end":"2025-12-09T02:21:11.980535Z","steps":["trace[2097505626] 'agreement among raft nodes before linearized reading'  (duration: 264.957158ms)"],"step_count":1}
	
	
	==> etcd [b12a876b1607656dcae43a6f788e50cffa4515f9e671385f84c9294e3f8ea253] <==
	{"level":"warn","ts":"2025-12-09T02:18:49.795725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:18:49.816240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:18:49.829924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:18:49.837887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:18:49.845703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:18:49.854827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:18:49.901744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37072","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-09T02:18:54.210475Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-09T02:18:54.210542Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-074400","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.13:2380"],"advertise-client-urls":["https://192.168.39.13:2379"]}
	{"level":"error","ts":"2025-12-09T02:18:54.212446Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-09T02:19:01.212667Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-09T02:19:01.213924Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-09T02:19:01.214248Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-09T02:19:01.214606Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-09T02:19:01.214721Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-09T02:19:01.214406Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.13:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-09T02:19:01.214761Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.13:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-09T02:19:01.214844Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.13:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-09T02:19:01.214433Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"1d3fba3e6c6ecbcd","current-leader-member-id":"1d3fba3e6c6ecbcd"}
	{"level":"info","ts":"2025-12-09T02:19:01.214963Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-09T02:19:01.214986Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-09T02:19:01.221888Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.13:2380"}
	{"level":"error","ts":"2025-12-09T02:19:01.221977Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.13:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-09T02:19:01.222007Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.13:2380"}
	{"level":"info","ts":"2025-12-09T02:19:01.222014Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-074400","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.13:2380"],"advertise-client-urls":["https://192.168.39.13:2379"]}
	
	
	==> kernel <==
	 02:21:15 up 4 min,  0 users,  load average: 2.29, 1.11, 0.48
	Linux functional-074400 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [527eadb8601da5def17eaa2edb74967c026667ca94cc89d7396a212f7a334be8] <==
	I1209 02:19:15.653378       1 cache.go:39] Caches are synced for autoregister controller
	I1209 02:19:15.653467       1 shared_informer.go:377] "Caches are synced"
	I1209 02:19:15.653479       1 policy_source.go:248] refreshing policies
	I1209 02:19:15.681401       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 02:19:15.896322       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1209 02:19:16.416999       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1209 02:19:17.193048       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1209 02:19:17.241707       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1209 02:19:17.277368       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 02:19:17.286105       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 02:19:22.288967       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 02:19:22.299738       1 controller.go:667] quota admission added evaluator for: endpoints
	I1209 02:19:22.301479       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1209 02:19:33.939396       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.109.174.82"}
	I1209 02:19:40.704502       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.107.58.57"}
	I1209 02:19:46.055771       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.97.104.66"}
	I1209 02:20:19.017969       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.102.207.80"}
	E1209 02:20:54.883855       1 conn.go:339] Error on socket receive: read tcp 192.168.39.13:8441->192.168.39.1:39546: use of closed network connection
	E1209 02:21:11.311679       1 conn.go:339] Error on socket receive: read tcp 192.168.39.13:8441->192.168.39.1:55936: use of closed network connection
	E1209 02:21:11.459426       1 conn.go:339] Error on socket receive: read tcp 192.168.39.13:8441->192.168.39.1:55958: use of closed network connection
	E1209 02:21:12.271547       1 conn.go:339] Error on socket receive: read tcp 192.168.39.13:8441->192.168.39.1:55998: use of closed network connection
	I1209 02:21:12.974659       1 controller.go:667] quota admission added evaluator for: namespaces
	I1209 02:21:13.362332       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.92.119"}
	I1209 02:21:13.396332       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.88.83"}
	E1209 02:21:14.318038       1 conn.go:339] Error on socket receive: read tcp 192.168.39.13:8441->192.168.39.1:38120: use of closed network connection
	
	
	==> kube-controller-manager [4564cf1f6c21fbf6c388bc6c5703fdc330467e6a7b0e87256575ffbb8496510c] <==
	I1209 02:18:53.759401       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.759504       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.759565       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.761008       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.761195       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.761278       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.761339       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.761405       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.761835       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.761889       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.762212       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.762287       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.762362       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.763758       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.767043       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.768526       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.768854       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.769014       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.769302       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.773628       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.835652       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.835673       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1209 02:18:53.835677       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1209 02:18:53.850356       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:54.024457       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-controller-manager [5720f563ee852c04aade1c1dcfe0527e5d0b9cafff5f5324ea01a48641c2b879] <==
	I1209 02:19:18.782519       1 shared_informer.go:377] "Caches are synced"
	I1209 02:19:18.782651       1 shared_informer.go:377] "Caches are synced"
	I1209 02:19:18.782851       1 shared_informer.go:377] "Caches are synced"
	I1209 02:19:18.786354       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1209 02:19:18.786380       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:19:18.786386       1 shared_informer.go:377] "Caches are synced"
	I1209 02:19:18.786453       1 shared_informer.go:377] "Caches are synced"
	I1209 02:19:18.786695       1 shared_informer.go:377] "Caches are synced"
	I1209 02:19:18.788109       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-074400"
	I1209 02:19:18.788161       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1209 02:19:18.788214       1 shared_informer.go:377] "Caches are synced"
	I1209 02:19:18.788232       1 shared_informer.go:377] "Caches are synced"
	I1209 02:19:18.800647       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:19:18.802843       1 shared_informer.go:377] "Caches are synced"
	I1209 02:19:18.885650       1 shared_informer.go:377] "Caches are synced"
	I1209 02:19:18.885669       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1209 02:19:18.885673       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1209 02:19:18.901262       1 shared_informer.go:377] "Caches are synced"
	E1209 02:21:13.116722       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:21:13.123459       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:21:13.129856       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:21:13.142405       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:21:13.160655       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:21:13.181089       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:21:13.186394       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [14173d291caa4718370d24685ac59608d6db0097c2bb19631282e71389726769] <==
	I1209 02:19:16.376344       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:19:16.477454       1 shared_informer.go:377] "Caches are synced"
	I1209 02:19:16.478153       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.13"]
	E1209 02:19:16.478347       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 02:19:16.560522       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1209 02:19:16.560595       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 02:19:16.560616       1 server_linux.go:136] "Using iptables Proxier"
	I1209 02:19:16.584037       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 02:19:16.584852       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1209 02:19:16.584967       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:19:16.594721       1 config.go:200] "Starting service config controller"
	I1209 02:19:16.594848       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 02:19:16.595293       1 config.go:106] "Starting endpoint slice config controller"
	I1209 02:19:16.595401       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 02:19:16.595642       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 02:19:16.595666       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 02:19:16.599361       1 config.go:309] "Starting node config controller"
	I1209 02:19:16.599484       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 02:19:16.599506       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 02:19:16.695426       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 02:19:16.696148       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 02:19:16.696170       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [823c02139d722aeeb211b14581e8aa5f8644ac71dc817330ea18d811ea6d2be9] <==
	E1209 02:18:50.598930       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes \"functional-074400\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]"
	I1209 02:18:50.623250       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:50.623366       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.13"]
	E1209 02:18:50.623546       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 02:18:50.691204       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1209 02:18:50.691370       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 02:18:50.691447       1 server_linux.go:136] "Using iptables Proxier"
	I1209 02:18:50.707836       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 02:18:50.708643       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1209 02:18:50.708664       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:18:50.711481       1 config.go:106] "Starting endpoint slice config controller"
	I1209 02:18:50.711590       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 02:18:50.716157       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 02:18:50.716175       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 02:18:50.718528       1 config.go:200] "Starting service config controller"
	I1209 02:18:50.718557       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 02:18:50.719218       1 config.go:309] "Starting node config controller"
	I1209 02:18:50.720198       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 02:18:50.720323       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 02:18:50.811923       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1209 02:18:50.816271       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 02:18:50.819646       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [bacf40dd7a6ed51348aa2ab49f8b52e918b4dfd14fb9632d8e828165a44be415] <==
	I1209 02:18:48.669545       1 serving.go:386] Generated self-signed cert in-memory
	W1209 02:18:50.520599       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 02:18:50.521792       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 02:18:50.521850       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 02:18:50.521868       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 02:18:50.585519       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1209 02:18:50.585561       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:18:50.589889       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:18:50.589966       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:18:50.592258       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1209 02:18:50.592353       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 02:18:50.700431       1 shared_informer.go:377] "Caches are synced"
	I1209 02:19:01.312948       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1209 02:19:01.313214       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1209 02:19:01.313232       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1209 02:19:01.313331       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:19:01.313500       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1209 02:19:01.313516       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d7e187bd9a3a6318fa2d523b92b9013a408f816ace5db8c1c222c7793427524f] <==
	E1209 02:19:15.548746       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1209 02:19:15.548770       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1209 02:19:15.546661       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1209 02:19:15.548799       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1209 02:19:15.552362       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1209 02:19:15.555476       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1209 02:19:15.564497       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1209 02:19:15.564624       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1209 02:19:15.564620       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1209 02:19:15.564752       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1209 02:19:15.564840       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1209 02:19:15.564884       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1209 02:19:15.564936       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1209 02:19:15.564975       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1209 02:19:15.565195       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1209 02:19:15.565232       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1209 02:19:15.565344       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1209 02:19:15.565387       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1209 02:19:15.569414       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1209 02:19:15.572307       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1209 02:19:15.572574       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1209 02:19:15.572705       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1209 02:19:15.572795       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1209 02:19:15.572832       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	I1209 02:19:16.825606       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 09 02:20:53 functional-074400 kubelet[6677]: E1209 02:20:53.015413    6677 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765246853014853485  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:194420}  inodes_used:{value:83}}"
	Dec 09 02:20:54 functional-074400 kubelet[6677]: I1209 02:20:54.926603    6677 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=7.7651104140000005 podStartE2EDuration="1m6.926046707s" podCreationTimestamp="2025-12-09 02:19:48 +0000 UTC" firstStartedPulling="2025-12-09 02:19:48.942554735 +0000 UTC m=+36.231844636" lastFinishedPulling="2025-12-09 02:20:48.103491025 +0000 UTC m=+95.392780929" observedRunningTime="2025-12-09 02:20:48.585897539 +0000 UTC m=+95.875187439" watchObservedRunningTime="2025-12-09 02:20:54.926046707 +0000 UTC m=+102.215336616"
	Dec 09 02:20:57 functional-074400 kubelet[6677]: E1209 02:20:57.885839    6677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-074400" containerName="kube-controller-manager"
	Dec 09 02:20:58 functional-074400 kubelet[6677]: I1209 02:20:58.048039    6677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e59e1437920a4406b3a8fe85285ddd011416fb2bae902d474bbde7367c9b07a8"
	Dec 09 02:21:03 functional-074400 kubelet[6677]: E1209 02:21:03.019222    6677 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765246863017779079  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:194420}  inodes_used:{value:83}}"
	Dec 09 02:21:03 functional-074400 kubelet[6677]: E1209 02:21:03.019360    6677 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765246863017779079  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:194420}  inodes_used:{value:83}}"
	Dec 09 02:21:03 functional-074400 kubelet[6677]: I1209 02:21:03.323738    6677 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/0a7860b8-9827-416f-b125-83a9a78d15d3-kube-api-access-2zxs7\" (UniqueName: \"kubernetes.io/projected/0a7860b8-9827-416f-b125-83a9a78d15d3-kube-api-access-2zxs7\") pod \"0a7860b8-9827-416f-b125-83a9a78d15d3\" (UID: \"0a7860b8-9827-416f-b125-83a9a78d15d3\") "
	Dec 09 02:21:03 functional-074400 kubelet[6677]: I1209 02:21:03.325187    6677 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/0a7860b8-9827-416f-b125-83a9a78d15d3-pvc-9ec28d63-d986-4452-b4e2-8a019be6ee62\" (UniqueName: \"kubernetes.io/host-path/0a7860b8-9827-416f-b125-83a9a78d15d3-pvc-9ec28d63-d986-4452-b4e2-8a019be6ee62\") pod \"0a7860b8-9827-416f-b125-83a9a78d15d3\" (UID: \"0a7860b8-9827-416f-b125-83a9a78d15d3\") "
	Dec 09 02:21:03 functional-074400 kubelet[6677]: I1209 02:21:03.325287    6677 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a7860b8-9827-416f-b125-83a9a78d15d3-pvc-9ec28d63-d986-4452-b4e2-8a019be6ee62" pod "0a7860b8-9827-416f-b125-83a9a78d15d3" (UID: "0a7860b8-9827-416f-b125-83a9a78d15d3"). InnerVolumeSpecName "pvc-9ec28d63-d986-4452-b4e2-8a019be6ee62". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 09 02:21:03 functional-074400 kubelet[6677]: I1209 02:21:03.332514    6677 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a7860b8-9827-416f-b125-83a9a78d15d3-kube-api-access-2zxs7" pod "0a7860b8-9827-416f-b125-83a9a78d15d3" (UID: "0a7860b8-9827-416f-b125-83a9a78d15d3"). InnerVolumeSpecName "kube-api-access-2zxs7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 09 02:21:03 functional-074400 kubelet[6677]: I1209 02:21:03.425414    6677 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2zxs7\" (UniqueName: \"kubernetes.io/projected/0a7860b8-9827-416f-b125-83a9a78d15d3-kube-api-access-2zxs7\") on node \"functional-074400\" DevicePath \"\""
	Dec 09 02:21:03 functional-074400 kubelet[6677]: I1209 02:21:03.425445    6677 reconciler_common.go:299] "Volume detached for volume \"pvc-9ec28d63-d986-4452-b4e2-8a019be6ee62\" (UniqueName: \"kubernetes.io/host-path/0a7860b8-9827-416f-b125-83a9a78d15d3-pvc-9ec28d63-d986-4452-b4e2-8a019be6ee62\") on node \"functional-074400\" DevicePath \"\""
	Dec 09 02:21:04 functional-074400 kubelet[6677]: I1209 02:21:04.150460    6677 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/mysql-7d7b65bc95-r489h" podStartSLOduration=1.550461646 podStartE2EDuration="45.150438012s" podCreationTimestamp="2025-12-09 02:20:19 +0000 UTC" firstStartedPulling="2025-12-09 02:20:19.66974415 +0000 UTC m=+66.959034051" lastFinishedPulling="2025-12-09 02:21:03.269720527 +0000 UTC m=+110.559010417" observedRunningTime="2025-12-09 02:21:04.114768533 +0000 UTC m=+111.404058442" watchObservedRunningTime="2025-12-09 02:21:04.150438012 +0000 UTC m=+111.439727946"
	Dec 09 02:21:04 functional-074400 kubelet[6677]: I1209 02:21:04.434105    6677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-647jr\" (UniqueName: \"kubernetes.io/projected/431c06e4-5599-47d4-8f8e-fe047be3b9b9-kube-api-access-647jr\") pod \"sp-pod\" (UID: \"431c06e4-5599-47d4-8f8e-fe047be3b9b9\") " pod="default/sp-pod"
	Dec 09 02:21:04 functional-074400 kubelet[6677]: I1209 02:21:04.434176    6677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9ec28d63-d986-4452-b4e2-8a019be6ee62\" (UniqueName: \"kubernetes.io/host-path/431c06e4-5599-47d4-8f8e-fe047be3b9b9-pvc-9ec28d63-d986-4452-b4e2-8a019be6ee62\") pod \"sp-pod\" (UID: \"431c06e4-5599-47d4-8f8e-fe047be3b9b9\") " pod="default/sp-pod"
	Dec 09 02:21:04 functional-074400 kubelet[6677]: I1209 02:21:04.890517    6677 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0a7860b8-9827-416f-b125-83a9a78d15d3" path="/var/lib/kubelet/pods/0a7860b8-9827-416f-b125-83a9a78d15d3/volumes"
	Dec 09 02:21:12 functional-074400 kubelet[6677]: E1209 02:21:12.269307    6677 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:38450->127.0.0.1:44871: write tcp 127.0.0.1:38450->127.0.0.1:44871: write: broken pipe
	Dec 09 02:21:13 functional-074400 kubelet[6677]: E1209 02:21:13.023423    6677 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765246873022734200  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:215038}  inodes_used:{value:88}}"
	Dec 09 02:21:13 functional-074400 kubelet[6677]: E1209 02:21:13.023444    6677 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765246873022734200  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:215038}  inodes_used:{value:88}}"
	Dec 09 02:21:13 functional-074400 kubelet[6677]: I1209 02:21:13.228302    6677 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=9.228286178 podStartE2EDuration="9.228286178s" podCreationTimestamp="2025-12-09 02:21:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 02:21:06.145024581 +0000 UTC m=+113.434314490" watchObservedRunningTime="2025-12-09 02:21:13.228286178 +0000 UTC m=+120.517576087"
	Dec 09 02:21:13 functional-074400 kubelet[6677]: I1209 02:21:13.306044    6677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0206c175-f020-4297-b789-61d4c53145d9-tmp-volume\") pod \"dashboard-metrics-scraper-5565989548-2cgfd\" (UID: \"0206c175-f020-4297-b789-61d4c53145d9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-2cgfd"
	Dec 09 02:21:13 functional-074400 kubelet[6677]: I1209 02:21:13.306152    6677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/51aeba10-230e-4e7c-9eed-7e81b00f9578-tmp-volume\") pod \"kubernetes-dashboard-b84665fb8-j58f5\" (UID: \"51aeba10-230e-4e7c-9eed-7e81b00f9578\") " pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-j58f5"
	Dec 09 02:21:13 functional-074400 kubelet[6677]: I1209 02:21:13.306176    6677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vmnr\" (UniqueName: \"kubernetes.io/projected/0206c175-f020-4297-b789-61d4c53145d9-kube-api-access-5vmnr\") pod \"dashboard-metrics-scraper-5565989548-2cgfd\" (UID: \"0206c175-f020-4297-b789-61d4c53145d9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-2cgfd"
	Dec 09 02:21:13 functional-074400 kubelet[6677]: I1209 02:21:13.306192    6677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jsb2\" (UniqueName: \"kubernetes.io/projected/51aeba10-230e-4e7c-9eed-7e81b00f9578-kube-api-access-4jsb2\") pod \"kubernetes-dashboard-b84665fb8-j58f5\" (UID: \"51aeba10-230e-4e7c-9eed-7e81b00f9578\") " pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-j58f5"
	Dec 09 02:21:13 functional-074400 kubelet[6677]: I1209 02:21:13.720520    6677 scope.go:122] "RemoveContainer" containerID="50b2a2480582dd37a8e6da7688457ff8262f034078e93071d170ae77c553afe1"
	
	
	==> storage-provisioner [5cfeb83409be8ec96c0b53f9c541a123c60a81424ed708b316900cfc6bac7634] <==
	W1209 02:20:48.186509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:20:50.197193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:20:50.221389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:20:52.235717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:20:52.256574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:20:54.264254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:20:54.621805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:20:56.627525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:20:56.904413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:20:58.914027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:20:58.925789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:21:00.931356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:21:01.541481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:21:03.545669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:21:03.560568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:21:05.565332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:21:05.573288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:21:07.578274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:21:07.584740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:21:09.589572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:21:09.607710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:21:11.618433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:21:11.982248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:21:13.992713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:21:14.009566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [93888ae1c2ed9e59841961ffbd087abe1b960432ec14bd6ede69fe08b06f6528] <==
	I1209 02:18:47.839420       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1209 02:18:47.847561       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-074400 -n functional-074400
helpers_test.go:269: (dbg) Run:  kubectl --context functional-074400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-5758569b79-qkj2j hello-node-connect-9f67c86d4-zbhnt dashboard-metrics-scraper-5565989548-2cgfd kubernetes-dashboard-b84665fb8-j58f5
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-074400 describe pod busybox-mount hello-node-5758569b79-qkj2j hello-node-connect-9f67c86d4-zbhnt dashboard-metrics-scraper-5565989548-2cgfd kubernetes-dashboard-b84665fb8-j58f5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-074400 describe pod busybox-mount hello-node-5758569b79-qkj2j hello-node-connect-9f67c86d4-zbhnt dashboard-metrics-scraper-5565989548-2cgfd kubernetes-dashboard-b84665fb8-j58f5: exit status 1 (125.494661ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-074400/192.168.39.13
	Start Time:       Tue, 09 Dec 2025 02:19:41 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://b89d37aeec53b9d6ee80b63c22598071f010cd29423aebcc64906de620467314
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 09 Dec 2025 02:20:13 +0000
	      Finished:     Tue, 09 Dec 2025 02:20:13 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tm5jg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-tm5jg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  95s   default-scheduler  Successfully assigned default/busybox-mount to functional-074400
	  Normal  Pulling    94s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     63s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.253s (31.491s including waiting). Image size: 4631262 bytes.
	  Normal  Created    63s   kubelet            Container created
	  Normal  Started    63s   kubelet            Container started
	
	
	Name:             hello-node-5758569b79-qkj2j
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-074400/192.168.39.13
	Start Time:       Tue, 09 Dec 2025 02:19:46 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p4hc6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-p4hc6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  90s                default-scheduler  Successfully assigned default/hello-node-5758569b79-qkj2j to functional-074400
	  Warning  Failed     33s                kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     33s                kubelet            Error: ErrImagePull
	  Normal   BackOff    32s                kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     32s                kubelet            Error: ImagePullBackOff
	  Normal   Pulling    22s (x2 over 90s)  kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-9f67c86d4-zbhnt
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-074400/192.168.39.13
	Start Time:       Tue, 09 Dec 2025 02:19:40 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4qf9x (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4qf9x:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  96s                default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-zbhnt to functional-074400
	  Warning  Failed     65s                kubelet            Failed to pull image "kicbase/echo-server": initializing source docker://kicbase/echo-server:latest: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     65s                kubelet            Error: ErrImagePull
	  Normal   BackOff    64s                kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     64s                kubelet            Error: ImagePullBackOff
	  Normal   Pulling    49s (x2 over 95s)  kubelet            Pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-2cgfd" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-j58f5" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-074400 describe pod busybox-mount hello-node-5758569b79-qkj2j hello-node-connect-9f67c86d4-zbhnt dashboard-metrics-scraper-5565989548-2cgfd kubernetes-dashboard-b84665fb8-j58f5: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (4.20s)
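The kube-controller-manager errors in the logs above (the E1209 02:21:13 lines) show both dashboard ReplicaSets being rejected with serviceaccount "kubernetes-dashboard" not found, while the kubelet entries at the same timestamp already show volumes being attached for both dashboard pods, so the ServiceAccount was apparently created only moments after the ReplicaSets first synced. A minimal way to confirm that ordering when triaging by hand; these commands are a sketch against the same functional-074400 context and are not part of the recorded run:

	kubectl --context functional-074400 -n kubernetes-dashboard get serviceaccounts,replicasets,pods
	kubectl --context functional-074400 -n kubernetes-dashboard get events --sort-by=.lastTimestamp

If the ServiceAccount's age is newer than the ReplicaSets', the "not found" errors are just the controller retrying until the dashboard manifests finished applying, and the NotFound results from the describe step above would simply mean those pod names did not exist (or no longer existed) at the instant the post-mortem ran.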

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (603.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-074400 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-074400 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-9f67c86d4-zbhnt" [0fb77730-c093-4c34-b77c-749ed6480841] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-074400 -n functional-074400
functional_test.go:1645: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-09 02:29:40.966638336 +0000 UTC m=+2044.560258407
functional_test.go:1645: (dbg) Run:  kubectl --context functional-074400 describe po hello-node-connect-9f67c86d4-zbhnt -n default
functional_test.go:1645: (dbg) kubectl --context functional-074400 describe po hello-node-connect-9f67c86d4-zbhnt -n default:
Name:             hello-node-connect-9f67c86d4-zbhnt
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-074400/192.168.39.13
Start Time:       Tue, 09 Dec 2025 02:19:40 +0000
Labels:           app=hello-node-connect
pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4qf9x (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-4qf9x:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-zbhnt to functional-074400
Warning  Failed     9m30s                 kubelet            Failed to pull image "kicbase/echo-server": initializing source docker://kicbase/echo-server:latest: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     3m31s (x2 over 6m2s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    2m3s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     54s (x5 over 9m30s)   kubelet            Error: ErrImagePull
Warning  Failed     54s (x2 over 8m8s)    kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    4s (x15 over 9m29s)   kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4s (x15 over 9m29s)   kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-074400 logs hello-node-connect-9f67c86d4-zbhnt -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-074400 logs hello-node-connect-9f67c86d4-zbhnt -n default: exit status 1 (78.911014ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-zbhnt" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-074400 logs hello-node-connect-9f67c86d4-zbhnt -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-074400 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-9f67c86d4-zbhnt
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-074400/192.168.39.13
Start Time:       Tue, 09 Dec 2025 02:19:40 +0000
Labels:           app=hello-node-connect
pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4qf9x (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-4qf9x:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-zbhnt to functional-074400
Warning  Failed     9m30s                 kubelet            Failed to pull image "kicbase/echo-server": initializing source docker://kicbase/echo-server:latest: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     3m31s (x2 over 6m2s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    2m3s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     54s (x5 over 9m30s)   kubelet            Error: ErrImagePull
Warning  Failed     54s (x2 over 8m8s)    kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    4s (x15 over 9m29s)   kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4s (x15 over 9m29s)   kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-074400 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-074400 logs -l app=hello-node-connect: exit status 1 (76.299811ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-zbhnt" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-074400 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-074400 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.107.58.57
IPs:                      10.107.58.57
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32041/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
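The empty Endpoints field above is consistent with the pod events earlier in this post-mortem: the single hello-node-connect pod never became Ready because every pull of kicbase/echo-server was refused with Docker Hub's unauthenticated rate limit (toomanyrequests), so the NodePort service had no backends. Two common local workarounds are sketched below; they are not part of the recorded run, and the secret name, the credential placeholders, and the assumption that the image already exists in the local Docker daemon are all hypothetical:

	# side-load the image so no remote pull is needed (assumes it is present in the local Docker daemon)
	minikube -p functional-074400 image load kicbase/echo-server:latest

	# or let pulls count against an authenticated Docker Hub quota
	kubectl --context functional-074400 create secret docker-registry dockerhub-creds \
	    --docker-username=<user> --docker-password=<access-token>
	kubectl --context functional-074400 patch serviceaccount default \
	    -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'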
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-074400 -n functional-074400
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-074400 logs -n 25: (1.46998513s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                    ARGS                                                                     │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-074400 ssh findmnt -T /mount-9p | grep 9p                                                                                        │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │                     │
	│ ssh            │ functional-074400 ssh findmnt -T /mount-9p | grep 9p                                                                                        │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │ 09 Dec 25 02:20 UTC │
	│ ssh            │ functional-074400 ssh -- ls -la /mount-9p                                                                                                   │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │ 09 Dec 25 02:20 UTC │
	│ ssh            │ functional-074400 ssh sudo umount -f /mount-9p                                                                                              │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │                     │
	│ mount          │ -p functional-074400 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2669497316/001:/mount2 --alsologtostderr -v=1        │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │                     │
	│ mount          │ -p functional-074400 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2669497316/001:/mount1 --alsologtostderr -v=1        │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │                     │
	│ mount          │ -p functional-074400 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2669497316/001:/mount3 --alsologtostderr -v=1        │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │                     │
	│ ssh            │ functional-074400 ssh findmnt -T /mount1                                                                                                    │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │                     │
	│ ssh            │ functional-074400 ssh findmnt -T /mount1                                                                                                    │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │ 09 Dec 25 02:20 UTC │
	│ ssh            │ functional-074400 ssh findmnt -T /mount2                                                                                                    │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │ 09 Dec 25 02:20 UTC │
	│ ssh            │ functional-074400 ssh findmnt -T /mount3                                                                                                    │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │ 09 Dec 25 02:20 UTC │
	│ mount          │ -p functional-074400 --kill=true                                                                                                            │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:20 UTC │                     │
	│ start          │ -p functional-074400 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:21 UTC │                     │
	│ start          │ -p functional-074400 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0           │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:21 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-074400 --alsologtostderr -v=1                                                                              │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:21 UTC │                     │
	│ update-context │ functional-074400 update-context --alsologtostderr -v=2                                                                                     │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:21 UTC │ 09 Dec 25 02:21 UTC │
	│ update-context │ functional-074400 update-context --alsologtostderr -v=2                                                                                     │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:21 UTC │ 09 Dec 25 02:21 UTC │
	│ update-context │ functional-074400 update-context --alsologtostderr -v=2                                                                                     │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:21 UTC │ 09 Dec 25 02:21 UTC │
	│ image          │ functional-074400 image ls --format short --alsologtostderr                                                                                 │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:21 UTC │ 09 Dec 25 02:21 UTC │
	│ image          │ functional-074400 image ls --format yaml --alsologtostderr                                                                                  │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:21 UTC │ 09 Dec 25 02:21 UTC │
	│ ssh            │ functional-074400 ssh pgrep buildkitd                                                                                                       │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:21 UTC │                     │
	│ image          │ functional-074400 image build -t localhost/my-image:functional-074400 testdata/build --alsologtostderr                                      │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:21 UTC │ 09 Dec 25 02:21 UTC │
	│ image          │ functional-074400 image ls --format json --alsologtostderr                                                                                  │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:21 UTC │ 09 Dec 25 02:21 UTC │
	│ image          │ functional-074400 image ls --format table --alsologtostderr                                                                                 │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:21 UTC │ 09 Dec 25 02:21 UTC │
	│ image          │ functional-074400 image ls                                                                                                                  │ functional-074400 │ jenkins │ v1.37.0 │ 09 Dec 25 02:21 UTC │ 09 Dec 25 02:21 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 02:21:11
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 02:21:11.668557  270845 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:21:11.668720  270845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:21:11.668728  270845 out.go:374] Setting ErrFile to fd 2...
	I1209 02:21:11.668735  270845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:21:11.669124  270845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
	I1209 02:21:11.669862  270845 out.go:368] Setting JSON to false
	I1209 02:21:11.671248  270845 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":29022,"bootTime":1765217850,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:21:11.671349  270845 start.go:143] virtualization: kvm guest
	I1209 02:21:11.674987  270845 out.go:179] * [functional-074400] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 02:21:11.677897  270845 notify.go:221] Checking for updates...
	I1209 02:21:11.678058  270845 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:21:11.685602  270845 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:21:11.687614  270845 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-254936/kubeconfig
	I1209 02:21:11.696389  270845 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-254936/.minikube
	I1209 02:21:11.707660  270845 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:21:11.712881  270845 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:21:11.723017  270845 config.go:182] Loaded profile config "functional-074400": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 02:21:11.723551  270845 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:21:11.824590  270845 out.go:179] * Using the kvm2 driver based on existing profile
	I1209 02:21:11.841576  270845 start.go:309] selected driver: kvm2
	I1209 02:21:11.841611  270845 start.go:927] validating driver "kvm2" against &{Name:functional-074400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-074400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:21:11.841758  270845 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:21:11.843215  270845 cni.go:84] Creating CNI manager for ""
	I1209 02:21:11.843311  270845 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 02:21:11.843361  270845 start.go:353] cluster config:
	{Name:functional-074400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074400 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:21:11.866464  270845 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 09 02:29:42 functional-074400 crio[5283]: time="2025-12-09 02:29:42.035312808Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765247382035284837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:240697,},InodesUsed:&UInt64Value{Value:104,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8237a591-7b59-4086-922a-7a903e31e9e2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:29:42 functional-074400 crio[5283]: time="2025-12-09 02:29:42.036512499Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a8ff5ac-574d-4279-aa24-4410ef19afbe name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:29:42 functional-074400 crio[5283]: time="2025-12-09 02:29:42.036729008Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a8ff5ac-574d-4279-aa24-4410ef19afbe name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:29:42 functional-074400 crio[5283]: time="2025-12-09 02:29:42.037264624Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:93afd08a59ada97d1a31a37780c4f5983e4a5e3ddb9c08fa3f3d59d42259bbbb,PodSandboxId:2159112d0fa49467af9e1a91bf079d7c7c6002945edcc7b75abdec2da7af7814,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765246865079041234,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 431c06e4-5599-47d4-8f8e-fe047be3b9b9,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fef945be98899fa8b02dd19aff3b8c77eb0f0c6ce94c3a69e0652753f1ff55a3,PodSandboxId:e7d154816b9d1fa1ac2af1fd38e335654c107702110bc6d8ec71c7e2051b9b93,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765246863291902987,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-r489h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a98ebf9-c223-495b-9d6b-890b748749e8,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"
TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b89d37aeec53b9d6ee80b63c22598071f010cd29423aebcc64906de620467314,PodSandboxId:8bf7207eb36542c95d86af3a2d3d637df56a27722fc7e059e32f9a59816f343a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765246813722508450,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 315ec66e-345d-4c53-a2a6-50f943add31b,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14173d291caa4718370d24685ac59608d6db0097c2bb19631282e71389726769,PodSandboxId:c29d3b18a6a3610837f59130f5440c294d48219315b97d32918d55c95bc57db8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765246756144763673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kqmgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7e3ff87-a95f-47b2-8a2b-c259bae12d26,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.res
tartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96bf9fc86c3c3ab191e90888c1eca1ed3482b2143a2ff87819c0e3b7b5aa541c,PodSandboxId:36a619ae05fdaec1e59a961632b7b4c39e227dbeb5c5d7fb7ab3e266ef416151,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765246756145489157,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-jc7zv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43eea0bb-be89-4179-aa9e-6c2354730e6e,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:527eadb8601da5def17eaa2edb74967c026667ca94cc89d7396a212f7a334be8,PodSandboxId:c8ced35f3f07a378f22ed5f8f42d12c198079192e611afbbcdb40c044f19bfa7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:17652467535
79574512,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98e8c17203ca10918c9f38c7d0e332c8,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7e187bd9a3a6318fa2d523b92b9013a408f816ace5db8c1c222c7793427524f,PodSandboxId:823ae767de64dda5ab0fd523e009c04adda0bffb5e3d3923a0300aced80dd593,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7b
b6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1765246753375190471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c61e57199dc27febe166ded52cc142d7,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5720f563ee852c04aade1c1dcfe0527e5d0b9cafff5f5324ea01a48641c2b879,PodSandboxId:f3953026b79acab4142afb82c0f731f122d103e98e4442c14ce7bf8018d8d677,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b
2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765246753347742444,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40e1bacf59823142e176506131e39c07,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a74ea4fb7f9b159bbd43d1477cadc8ec4c8fb68dcc7a6dd47eff4f26721c65aa,PodSandboxId:a08cf4f29ba9239e238b67ae93da1ccda30c769bd690fffb
ffd5375cb9f1ea16,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765246753320140788,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71e4e0a7d57a210efdc84f9032753d,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cfeb83409be8ec96c0b53f9c541a123c60a81424ed708b3
16900cfc6bac7634,PodSandboxId:fbd0c8d71de5a5378582509596f03922f8e7355ba930bdb0dc355697943c8a7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765246750746936761,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a490d4a5-6b55-4d29-b267-b700cba89a87,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e07378628f06fd0471083dd2035277e9a204d73d71caa2149d2520334a5a
8780,PodSandboxId:36a619ae05fdaec1e59a961632b7b4c39e227dbeb5c5d7fb7ab3e266ef416151,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765246728146643003,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-jc7zv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43eea0bb-be89-4179-aa9e-6c2354730e6e,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readines
s-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93888ae1c2ed9e59841961ffbd087abe1b960432ec14bd6ede69fe08b06f6528,PodSandboxId:fbd0c8d71de5a5378582509596f03922f8e7355ba930bdb0dc355697943c8a7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765246727258029938,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a490d4a5-6b55-4d29-b267-b700cba89a87,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b12a876b1607656dcae43a6f788e50cffa4515f9e671385f84c9294e3f8ea253,PodSandboxId:a08cf4f29ba9239e238b67ae93da1ccda30c769bd690fffbffd5375cb9f1ea16,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765246727110935066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71e4e0a7d57a210efdc84f9032753d,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,i
o.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bacf40dd7a6ed51348aa2ab49f8b52e918b4dfd14fb9632d8e828165a44be415,PodSandboxId:823ae767de64dda5ab0fd523e009c04adda0bffb5e3d3923a0300aced80dd593,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765246726970198747,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: c61e57199dc27febe166ded52cc142d7,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4564cf1f6c21fbf6c388bc6c5703fdc330467e6a7b0e87256575ffbb8496510c,PodSandboxId:f3953026b79acab4142afb82c0f731f122d103e98e4442c14ce7bf8018d8d677,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765246726898352510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-
manager,io.kubernetes.pod.name: kube-controller-manager-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40e1bacf59823142e176506131e39c07,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:823c02139d722aeeb211b14581e8aa5f8644ac71dc817330ea18d811ea6d2be9,PodSandboxId:c29d3b18a6a3610837f59130f5440c294d48219315b97d32918d55c95bc57db8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,St
ate:CONTAINER_EXITED,CreatedAt:1765246726822665281,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kqmgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7e3ff87-a95f-47b2-8a2b-c259bae12d26,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6a8ff5ac-574d-4279-aa24-4410ef19afbe name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:29:42 functional-074400 crio[5283]: time="2025-12-09 02:29:42.081439040Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c8f82bfe-ab09-4fe3-8ce4-9157d76f36b8 name=/runtime.v1.RuntimeService/Version
	Dec 09 02:29:42 functional-074400 crio[5283]: time="2025-12-09 02:29:42.081545009Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c8f82bfe-ab09-4fe3-8ce4-9157d76f36b8 name=/runtime.v1.RuntimeService/Version
	Dec 09 02:29:42 functional-074400 crio[5283]: time="2025-12-09 02:29:42.083543111Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=15f50000-2732-4a74-ba77-8b45bdb1f1af name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:29:42 functional-074400 crio[5283]: time="2025-12-09 02:29:42.084299537Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765247382084270942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:240697,},InodesUsed:&UInt64Value{Value:104,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15f50000-2732-4a74-ba77-8b45bdb1f1af name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:29:42 functional-074400 crio[5283]: time="2025-12-09 02:29:42.085709064Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d842466-f9d3-4bac-a03f-25fda6ae99e5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:29:42 functional-074400 crio[5283]: time="2025-12-09 02:29:42.085846568Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d842466-f9d3-4bac-a03f-25fda6ae99e5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:29:42 functional-074400 crio[5283]: time="2025-12-09 02:29:42.086381392Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:93afd08a59ada97d1a31a37780c4f5983e4a5e3ddb9c08fa3f3d59d42259bbbb,PodSandboxId:2159112d0fa49467af9e1a91bf079d7c7c6002945edcc7b75abdec2da7af7814,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765246865079041234,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 431c06e4-5599-47d4-8f8e-fe047be3b9b9,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fef945be98899fa8b02dd19aff3b8c77eb0f0c6ce94c3a69e0652753f1ff55a3,PodSandboxId:e7d154816b9d1fa1ac2af1fd38e335654c107702110bc6d8ec71c7e2051b9b93,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765246863291902987,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-r489h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a98ebf9-c223-495b-9d6b-890b748749e8,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"
TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b89d37aeec53b9d6ee80b63c22598071f010cd29423aebcc64906de620467314,PodSandboxId:8bf7207eb36542c95d86af3a2d3d637df56a27722fc7e059e32f9a59816f343a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765246813722508450,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 315ec66e-345d-4c53-a2a6-50f943add31b,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14173d291caa4718370d24685ac59608d6db0097c2bb19631282e71389726769,PodSandboxId:c29d3b18a6a3610837f59130f5440c294d48219315b97d32918d55c95bc57db8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765246756144763673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kqmgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7e3ff87-a95f-47b2-8a2b-c259bae12d26,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.res
tartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96bf9fc86c3c3ab191e90888c1eca1ed3482b2143a2ff87819c0e3b7b5aa541c,PodSandboxId:36a619ae05fdaec1e59a961632b7b4c39e227dbeb5c5d7fb7ab3e266ef416151,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765246756145489157,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-jc7zv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43eea0bb-be89-4179-aa9e-6c2354730e6e,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:527eadb8601da5def17eaa2edb74967c026667ca94cc89d7396a212f7a334be8,PodSandboxId:c8ced35f3f07a378f22ed5f8f42d12c198079192e611afbbcdb40c044f19bfa7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:17652467535
79574512,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98e8c17203ca10918c9f38c7d0e332c8,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7e187bd9a3a6318fa2d523b92b9013a408f816ace5db8c1c222c7793427524f,PodSandboxId:823ae767de64dda5ab0fd523e009c04adda0bffb5e3d3923a0300aced80dd593,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7b
b6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1765246753375190471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c61e57199dc27febe166ded52cc142d7,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5720f563ee852c04aade1c1dcfe0527e5d0b9cafff5f5324ea01a48641c2b879,PodSandboxId:f3953026b79acab4142afb82c0f731f122d103e98e4442c14ce7bf8018d8d677,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b
2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765246753347742444,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40e1bacf59823142e176506131e39c07,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a74ea4fb7f9b159bbd43d1477cadc8ec4c8fb68dcc7a6dd47eff4f26721c65aa,PodSandboxId:a08cf4f29ba9239e238b67ae93da1ccda30c769bd690fffb
ffd5375cb9f1ea16,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765246753320140788,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71e4e0a7d57a210efdc84f9032753d,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cfeb83409be8ec96c0b53f9c541a123c60a81424ed708b3
16900cfc6bac7634,PodSandboxId:fbd0c8d71de5a5378582509596f03922f8e7355ba930bdb0dc355697943c8a7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765246750746936761,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a490d4a5-6b55-4d29-b267-b700cba89a87,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e07378628f06fd0471083dd2035277e9a204d73d71caa2149d2520334a5a
8780,PodSandboxId:36a619ae05fdaec1e59a961632b7b4c39e227dbeb5c5d7fb7ab3e266ef416151,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765246728146643003,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-jc7zv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43eea0bb-be89-4179-aa9e-6c2354730e6e,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readines
s-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93888ae1c2ed9e59841961ffbd087abe1b960432ec14bd6ede69fe08b06f6528,PodSandboxId:fbd0c8d71de5a5378582509596f03922f8e7355ba930bdb0dc355697943c8a7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765246727258029938,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a490d4a5-6b55-4d29-b267-b700cba89a87,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b12a876b1607656dcae43a6f788e50cffa4515f9e671385f84c9294e3f8ea253,PodSandboxId:a08cf4f29ba9239e238b67ae93da1ccda30c769bd690fffbffd5375cb9f1ea16,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765246727110935066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71e4e0a7d57a210efdc84f9032753d,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,i
o.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bacf40dd7a6ed51348aa2ab49f8b52e918b4dfd14fb9632d8e828165a44be415,PodSandboxId:823ae767de64dda5ab0fd523e009c04adda0bffb5e3d3923a0300aced80dd593,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765246726970198747,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: c61e57199dc27febe166ded52cc142d7,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4564cf1f6c21fbf6c388bc6c5703fdc330467e6a7b0e87256575ffbb8496510c,PodSandboxId:f3953026b79acab4142afb82c0f731f122d103e98e4442c14ce7bf8018d8d677,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765246726898352510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-
manager,io.kubernetes.pod.name: kube-controller-manager-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40e1bacf59823142e176506131e39c07,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:823c02139d722aeeb211b14581e8aa5f8644ac71dc817330ea18d811ea6d2be9,PodSandboxId:c29d3b18a6a3610837f59130f5440c294d48219315b97d32918d55c95bc57db8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,St
ate:CONTAINER_EXITED,CreatedAt:1765246726822665281,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kqmgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7e3ff87-a95f-47b2-8a2b-c259bae12d26,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d842466-f9d3-4bac-a03f-25fda6ae99e5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:29:42 functional-074400 crio[5283]: time="2025-12-09 02:29:42.120774703Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6ad34910-e217-4da9-8a3e-9797e7106e12 name=/runtime.v1.RuntimeService/Version
	Dec 09 02:29:42 functional-074400 crio[5283]: time="2025-12-09 02:29:42.120993795Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6ad34910-e217-4da9-8a3e-9797e7106e12 name=/runtime.v1.RuntimeService/Version
	Dec 09 02:29:42 functional-074400 crio[5283]: time="2025-12-09 02:29:42.122675208Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=22f69ddb-5551-40f8-8851-fb06b06f3d53 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:29:42 functional-074400 crio[5283]: time="2025-12-09 02:29:42.123482569Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765247382123454659,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:240697,},InodesUsed:&UInt64Value{Value:104,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=22f69ddb-5551-40f8-8851-fb06b06f3d53 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:29:42 functional-074400 crio[5283]: time="2025-12-09 02:29:42.124641946Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e0cb2411-b6b9-48e2-8205-dd16a2359e53 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:29:42 functional-074400 crio[5283]: time="2025-12-09 02:29:42.124718651Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e0cb2411-b6b9-48e2-8205-dd16a2359e53 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:29:42 functional-074400 crio[5283]: time="2025-12-09 02:29:42.125198078Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:93afd08a59ada97d1a31a37780c4f5983e4a5e3ddb9c08fa3f3d59d42259bbbb,PodSandboxId:2159112d0fa49467af9e1a91bf079d7c7c6002945edcc7b75abdec2da7af7814,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765246865079041234,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 431c06e4-5599-47d4-8f8e-fe047be3b9b9,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fef945be98899fa8b02dd19aff3b8c77eb0f0c6ce94c3a69e0652753f1ff55a3,PodSandboxId:e7d154816b9d1fa1ac2af1fd38e335654c107702110bc6d8ec71c7e2051b9b93,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765246863291902987,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-r489h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a98ebf9-c223-495b-9d6b-890b748749e8,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"
TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b89d37aeec53b9d6ee80b63c22598071f010cd29423aebcc64906de620467314,PodSandboxId:8bf7207eb36542c95d86af3a2d3d637df56a27722fc7e059e32f9a59816f343a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765246813722508450,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 315ec66e-345d-4c53-a2a6-50f943add31b,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14173d291caa4718370d24685ac59608d6db0097c2bb19631282e71389726769,PodSandboxId:c29d3b18a6a3610837f59130f5440c294d48219315b97d32918d55c95bc57db8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765246756144763673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kqmgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7e3ff87-a95f-47b2-8a2b-c259bae12d26,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.res
tartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96bf9fc86c3c3ab191e90888c1eca1ed3482b2143a2ff87819c0e3b7b5aa541c,PodSandboxId:36a619ae05fdaec1e59a961632b7b4c39e227dbeb5c5d7fb7ab3e266ef416151,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765246756145489157,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-jc7zv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43eea0bb-be89-4179-aa9e-6c2354730e6e,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:527eadb8601da5def17eaa2edb74967c026667ca94cc89d7396a212f7a334be8,PodSandboxId:c8ced35f3f07a378f22ed5f8f42d12c198079192e611afbbcdb40c044f19bfa7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:17652467535
79574512,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98e8c17203ca10918c9f38c7d0e332c8,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7e187bd9a3a6318fa2d523b92b9013a408f816ace5db8c1c222c7793427524f,PodSandboxId:823ae767de64dda5ab0fd523e009c04adda0bffb5e3d3923a0300aced80dd593,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7b
b6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1765246753375190471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c61e57199dc27febe166ded52cc142d7,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5720f563ee852c04aade1c1dcfe0527e5d0b9cafff5f5324ea01a48641c2b879,PodSandboxId:f3953026b79acab4142afb82c0f731f122d103e98e4442c14ce7bf8018d8d677,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b
2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765246753347742444,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40e1bacf59823142e176506131e39c07,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a74ea4fb7f9b159bbd43d1477cadc8ec4c8fb68dcc7a6dd47eff4f26721c65aa,PodSandboxId:a08cf4f29ba9239e238b67ae93da1ccda30c769bd690fffb
ffd5375cb9f1ea16,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765246753320140788,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71e4e0a7d57a210efdc84f9032753d,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cfeb83409be8ec96c0b53f9c541a123c60a81424ed708b3
16900cfc6bac7634,PodSandboxId:fbd0c8d71de5a5378582509596f03922f8e7355ba930bdb0dc355697943c8a7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765246750746936761,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a490d4a5-6b55-4d29-b267-b700cba89a87,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e07378628f06fd0471083dd2035277e9a204d73d71caa2149d2520334a5a
8780,PodSandboxId:36a619ae05fdaec1e59a961632b7b4c39e227dbeb5c5d7fb7ab3e266ef416151,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765246728146643003,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-jc7zv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43eea0bb-be89-4179-aa9e-6c2354730e6e,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readines
s-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93888ae1c2ed9e59841961ffbd087abe1b960432ec14bd6ede69fe08b06f6528,PodSandboxId:fbd0c8d71de5a5378582509596f03922f8e7355ba930bdb0dc355697943c8a7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765246727258029938,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a490d4a5-6b55-4d29-b267-b700cba89a87,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b12a876b1607656dcae43a6f788e50cffa4515f9e671385f84c9294e3f8ea253,PodSandboxId:a08cf4f29ba9239e238b67ae93da1ccda30c769bd690fffbffd5375cb9f1ea16,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765246727110935066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71e4e0a7d57a210efdc84f9032753d,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,i
o.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bacf40dd7a6ed51348aa2ab49f8b52e918b4dfd14fb9632d8e828165a44be415,PodSandboxId:823ae767de64dda5ab0fd523e009c04adda0bffb5e3d3923a0300aced80dd593,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765246726970198747,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: c61e57199dc27febe166ded52cc142d7,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4564cf1f6c21fbf6c388bc6c5703fdc330467e6a7b0e87256575ffbb8496510c,PodSandboxId:f3953026b79acab4142afb82c0f731f122d103e98e4442c14ce7bf8018d8d677,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765246726898352510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-
manager,io.kubernetes.pod.name: kube-controller-manager-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40e1bacf59823142e176506131e39c07,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:823c02139d722aeeb211b14581e8aa5f8644ac71dc817330ea18d811ea6d2be9,PodSandboxId:c29d3b18a6a3610837f59130f5440c294d48219315b97d32918d55c95bc57db8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,St
ate:CONTAINER_EXITED,CreatedAt:1765246726822665281,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kqmgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7e3ff87-a95f-47b2-8a2b-c259bae12d26,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e0cb2411-b6b9-48e2-8205-dd16a2359e53 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:29:42 functional-074400 crio[5283]: time="2025-12-09 02:29:42.169827725Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a8c89707-9a69-4d07-9888-3497d35845a7 name=/runtime.v1.RuntimeService/Version
	Dec 09 02:29:42 functional-074400 crio[5283]: time="2025-12-09 02:29:42.170216273Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a8c89707-9a69-4d07-9888-3497d35845a7 name=/runtime.v1.RuntimeService/Version
	Dec 09 02:29:42 functional-074400 crio[5283]: time="2025-12-09 02:29:42.172964692Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fc2fe138-a6a8-4556-89a9-fa1468b1359f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:29:42 functional-074400 crio[5283]: time="2025-12-09 02:29:42.173746007Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765247382173719123,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:240697,},InodesUsed:&UInt64Value{Value:104,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fc2fe138-a6a8-4556-89a9-fa1468b1359f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 02:29:42 functional-074400 crio[5283]: time="2025-12-09 02:29:42.174941307Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=abaf9de2-5c49-4bfd-83cc-7d35e4591a7a name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:29:42 functional-074400 crio[5283]: time="2025-12-09 02:29:42.175015060Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=abaf9de2-5c49-4bfd-83cc-7d35e4591a7a name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 02:29:42 functional-074400 crio[5283]: time="2025-12-09 02:29:42.175627969Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:93afd08a59ada97d1a31a37780c4f5983e4a5e3ddb9c08fa3f3d59d42259bbbb,PodSandboxId:2159112d0fa49467af9e1a91bf079d7c7c6002945edcc7b75abdec2da7af7814,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765246865079041234,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 431c06e4-5599-47d4-8f8e-fe047be3b9b9,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fef945be98899fa8b02dd19aff3b8c77eb0f0c6ce94c3a69e0652753f1ff55a3,PodSandboxId:e7d154816b9d1fa1ac2af1fd38e335654c107702110bc6d8ec71c7e2051b9b93,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765246863291902987,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-r489h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a98ebf9-c223-495b-9d6b-890b748749e8,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"
TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b89d37aeec53b9d6ee80b63c22598071f010cd29423aebcc64906de620467314,PodSandboxId:8bf7207eb36542c95d86af3a2d3d637df56a27722fc7e059e32f9a59816f343a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765246813722508450,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 315ec66e-345d-4c53-a2a6-50f943add31b,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14173d291caa4718370d24685ac59608d6db0097c2bb19631282e71389726769,PodSandboxId:c29d3b18a6a3610837f59130f5440c294d48219315b97d32918d55c95bc57db8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765246756144763673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kqmgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7e3ff87-a95f-47b2-8a2b-c259bae12d26,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.res
tartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96bf9fc86c3c3ab191e90888c1eca1ed3482b2143a2ff87819c0e3b7b5aa541c,PodSandboxId:36a619ae05fdaec1e59a961632b7b4c39e227dbeb5c5d7fb7ab3e266ef416151,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765246756145489157,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-jc7zv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43eea0bb-be89-4179-aa9e-6c2354730e6e,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:527eadb8601da5def17eaa2edb74967c026667ca94cc89d7396a212f7a334be8,PodSandboxId:c8ced35f3f07a378f22ed5f8f42d12c198079192e611afbbcdb40c044f19bfa7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:17652467535
79574512,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98e8c17203ca10918c9f38c7d0e332c8,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7e187bd9a3a6318fa2d523b92b9013a408f816ace5db8c1c222c7793427524f,PodSandboxId:823ae767de64dda5ab0fd523e009c04adda0bffb5e3d3923a0300aced80dd593,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7b
b6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1765246753375190471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c61e57199dc27febe166ded52cc142d7,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5720f563ee852c04aade1c1dcfe0527e5d0b9cafff5f5324ea01a48641c2b879,PodSandboxId:f3953026b79acab4142afb82c0f731f122d103e98e4442c14ce7bf8018d8d677,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b
2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765246753347742444,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40e1bacf59823142e176506131e39c07,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a74ea4fb7f9b159bbd43d1477cadc8ec4c8fb68dcc7a6dd47eff4f26721c65aa,PodSandboxId:a08cf4f29ba9239e238b67ae93da1ccda30c769bd690fffb
ffd5375cb9f1ea16,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765246753320140788,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71e4e0a7d57a210efdc84f9032753d,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cfeb83409be8ec96c0b53f9c541a123c60a81424ed708b3
16900cfc6bac7634,PodSandboxId:fbd0c8d71de5a5378582509596f03922f8e7355ba930bdb0dc355697943c8a7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765246750746936761,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a490d4a5-6b55-4d29-b267-b700cba89a87,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e07378628f06fd0471083dd2035277e9a204d73d71caa2149d2520334a5a
8780,PodSandboxId:36a619ae05fdaec1e59a961632b7b4c39e227dbeb5c5d7fb7ab3e266ef416151,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765246728146643003,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-jc7zv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43eea0bb-be89-4179-aa9e-6c2354730e6e,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readines
s-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93888ae1c2ed9e59841961ffbd087abe1b960432ec14bd6ede69fe08b06f6528,PodSandboxId:fbd0c8d71de5a5378582509596f03922f8e7355ba930bdb0dc355697943c8a7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765246727258029938,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a490d4a5-6b55-4d29-b267-b700cba89a87,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b12a876b1607656dcae43a6f788e50cffa4515f9e671385f84c9294e3f8ea253,PodSandboxId:a08cf4f29ba9239e238b67ae93da1ccda30c769bd690fffbffd5375cb9f1ea16,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765246727110935066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc71e4e0a7d57a210efdc84f9032753d,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,i
o.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bacf40dd7a6ed51348aa2ab49f8b52e918b4dfd14fb9632d8e828165a44be415,PodSandboxId:823ae767de64dda5ab0fd523e009c04adda0bffb5e3d3923a0300aced80dd593,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765246726970198747,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: c61e57199dc27febe166ded52cc142d7,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4564cf1f6c21fbf6c388bc6c5703fdc330467e6a7b0e87256575ffbb8496510c,PodSandboxId:f3953026b79acab4142afb82c0f731f122d103e98e4442c14ce7bf8018d8d677,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765246726898352510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-
manager,io.kubernetes.pod.name: kube-controller-manager-functional-074400,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40e1bacf59823142e176506131e39c07,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:823c02139d722aeeb211b14581e8aa5f8644ac71dc817330ea18d811ea6d2be9,PodSandboxId:c29d3b18a6a3610837f59130f5440c294d48219315b97d32918d55c95bc57db8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,St
ate:CONTAINER_EXITED,CreatedAt:1765246726822665281,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kqmgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7e3ff87-a95f-47b2-8a2b-c259bae12d26,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=abaf9de2-5c49-4bfd-83cc-7d35e4591a7a name=/runtime.v1.RuntimeService/ListContainers
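
	The ListContainers request/response pairs above are CRI clients (the kubelet, or the log-collection tooling) polling CRI-O over the CRI gRPC API with an empty filter, which is why CRI-O logs "No filters were applied, returning full container list". Below is a minimal sketch of issuing the same call directly with the k8s.io/cri-api v1 bindings; the socket path is CRI-O's default, the program would need to run as root on the node (for example via minikube ssh), and none of its names come from the test harness itself.

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the CRI-O runtime socket (assumed default path).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Empty filter, mirroring the logged requests: the full container list comes back.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %-25s attempt=%d  %s\n",
				c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}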
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                         CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	93afd08a59ada       d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9                                              8 minutes ago       Running             myfrontend                0                   2159112d0fa49       sp-pod                                      default
	fef945be98899       public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036   8 minutes ago       Running             mysql                     0                   e7d154816b9d1       mysql-7d7b65bc95-r489h                      default
	b89d37aeec53b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e           9 minutes ago       Exited              mount-munger              0                   8bf7207eb3654       busybox-mount                               default
	96bf9fc86c3c3       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                              10 minutes ago      Running             coredns                   3                   36a619ae05fda       coredns-7d764666f9-jc7zv                    kube-system
	14173d291caa4       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                              10 minutes ago      Running             kube-proxy                3                   c29d3b18a6a36       kube-proxy-kqmgp                            kube-system
	527eadb8601da       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                              10 minutes ago      Running             kube-apiserver            0                   c8ced35f3f07a       kube-apiserver-functional-074400            kube-system
	d7e187bd9a3a6       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                              10 minutes ago      Running             kube-scheduler            3                   823ae767de64d       kube-scheduler-functional-074400            kube-system
	5720f563ee852       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                              10 minutes ago      Running             kube-controller-manager   3                   f3953026b79ac       kube-controller-manager-functional-074400   kube-system
	a74ea4fb7f9b1       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                              10 minutes ago      Running             etcd                      3                   a08cf4f29ba92       etcd-functional-074400                      kube-system
	5cfeb83409be8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              10 minutes ago      Running             storage-provisioner       4                   fbd0c8d71de5a       storage-provisioner                         kube-system
	e07378628f06f       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                              10 minutes ago      Exited              coredns                   2                   36a619ae05fda       coredns-7d764666f9-jc7zv                    kube-system
	93888ae1c2ed9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              10 minutes ago      Exited              storage-provisioner       3                   fbd0c8d71de5a       storage-provisioner                         kube-system
	b12a876b16076       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                              10 minutes ago      Exited              etcd                      2                   a08cf4f29ba92       etcd-functional-074400                      kube-system
	bacf40dd7a6ed       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                              10 minutes ago      Exited              kube-scheduler            2                   823ae767de64d       kube-scheduler-functional-074400            kube-system
	4564cf1f6c21f       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                              10 minutes ago      Exited              kube-controller-manager   2                   f3953026b79ac       kube-controller-manager-functional-074400   kube-system
	823c02139d722       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                              10 minutes ago      Exited              kube-proxy                2                   c29d3b18a6a36       kube-proxy-kqmgp                            kube-system
	
	
	==> coredns [96bf9fc86c3c3ab191e90888c1eca1ed3482b2143a2ff87819c0e3b7b5aa541c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:60550 - 1226 "HINFO IN 720540791316241156.5827029367603844758. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.024036833s
	
	
	==> coredns [e07378628f06fd0471083dd2035277e9a204d73d71caa2149d2520334a5a8780] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:51973 - 8162 "HINFO IN 7492499431195076065.3095992251303409408. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.033376104s
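
	The "plugin/reload: Running configuration SHA512 = ..." lines above are a hash of the Corefile each CoreDNS instance loaded, which in a kubeadm-style cluster lives in the coredns ConfigMap in kube-system. A short client-go sketch for fetching that Corefile for inspection follows; the kubeconfig location and the ConfigMap/key names are the standard kubeadm layout and are assumptions here, not values taken from these logs.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig path; minikube writes one under $HOME/.kube/config.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// kubeadm-style clusters keep the Corefile in the "coredns" ConfigMap.
		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.Background(), "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println(cm.Data["Corefile"])
	}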
	
	
	==> describe nodes <==
	Name:               functional-074400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-074400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=functional-074400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T02_17_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 02:17:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-074400
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 02:29:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 02:27:35 +0000   Tue, 09 Dec 2025 02:17:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 02:27:35 +0000   Tue, 09 Dec 2025 02:17:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 02:27:35 +0000   Tue, 09 Dec 2025 02:17:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 02:27:35 +0000   Tue, 09 Dec 2025 02:17:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.13
	  Hostname:    functional-074400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 db00c3ccb5e94810b925118d8c6c365e
	  System UUID:                db00c3cc-b5e9-4810-b925-118d8c6c365e
	  Boot ID:                    dda03a86-a51a-4ba3-a580-4d8b50831a16
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-qkj2j                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	  default                     hello-node-connect-9f67c86d4-zbhnt            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-7d7b65bc95-r489h                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    9m23s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 coredns-7d764666f9-jc7zv                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-074400                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-074400              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-074400     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-kqmgp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-074400              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-2cgfd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m29s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-j58f5          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                Age   From             Message
	  ----    ------                ----  ----             -------
	  Normal  RegisteredNode        12m   node-controller  Node functional-074400 event: Registered Node functional-074400 in Controller
	  Normal  CIDRAssignmentFailed  12m   cidrAllocator    Node functional-074400 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode        11m   node-controller  Node functional-074400 event: Registered Node functional-074400 in Controller
	  Normal  RegisteredNode        10m   node-controller  Node functional-074400 event: Registered Node functional-074400 in Controller
	  Normal  RegisteredNode        10m   node-controller  Node functional-074400 event: Registered Node functional-074400 in Controller
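
	The describe output above is rendered from the Node object's status: the four conditions, the capacity/allocatable figures, and the per-pod requests that feed the "Allocated resources" percentages (for example, 1350m of CPU requested against 2000m allocatable is the 67% shown). Below is a minimal client-go sketch that reads the same conditions and allocatable resources directly; the node name is taken from the output above, while the kubeconfig path is an assumption.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		node, err := cs.CoreV1().Nodes().Get(context.Background(), "functional-074400", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Same data kubectl describe prints as the Conditions table.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
		fmt.Println("allocatable cpu:   ", node.Status.Allocatable.Cpu())
		fmt.Println("allocatable memory:", node.Status.Allocatable.Memory())
	}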
	
	
	==> dmesg <==
	[Dec 9 02:17] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.141349] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.000029] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.639682] kauditd_printk_skb: 252 callbacks suppressed
	[ +25.087039] kauditd_printk_skb: 38 callbacks suppressed
	[Dec 9 02:18] kauditd_printk_skb: 11 callbacks suppressed
	[  +1.077221] kauditd_printk_skb: 183 callbacks suppressed
	[  +0.969669] kauditd_printk_skb: 185 callbacks suppressed
	[  +0.117305] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.675621] kauditd_printk_skb: 78 callbacks suppressed
	[  +6.655370] kauditd_printk_skb: 253 callbacks suppressed
	[Dec 9 02:19] kauditd_printk_skb: 19 callbacks suppressed
	[  +0.342083] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.131658] kauditd_printk_skb: 8 callbacks suppressed
	[  +4.871040] kauditd_printk_skb: 129 callbacks suppressed
	[  +0.098874] kauditd_printk_skb: 97 callbacks suppressed
	[  +4.134598] kauditd_printk_skb: 74 callbacks suppressed
	[Dec 9 02:20] kauditd_printk_skb: 74 callbacks suppressed
	[  +3.625583] kauditd_printk_skb: 31 callbacks suppressed
	[ +25.077425] kauditd_printk_skb: 38 callbacks suppressed
	[  +8.783835] kauditd_printk_skb: 11 callbacks suppressed
	[Dec 9 02:21] kauditd_printk_skb: 8 callbacks suppressed
	[  +8.594909] kauditd_printk_skb: 64 callbacks suppressed
	[  +2.712247] crun[10661]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +2.981090] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [a74ea4fb7f9b159bbd43d1477cadc8ec4c8fb68dcc7a6dd47eff4f26721c65aa] <==
	{"level":"warn","ts":"2025-12-09T02:20:58.207606Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-09T02:20:57.871508Z","time spent":"335.887273ms","remote":"127.0.0.1:37214","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":682,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-svgerveyegzzmwaexgf6tstxpq\" mod_revision:829 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-svgerveyegzzmwaexgf6tstxpq\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-svgerveyegzzmwaexgf6tstxpq\" > >"}
	{"level":"warn","ts":"2025-12-09T02:20:58.206529Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.832277ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T02:20:58.208922Z","caller":"traceutil/trace.go:172","msg":"trace[526939693] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:846; }","duration":"128.240337ms","start":"2025-12-09T02:20:58.080671Z","end":"2025-12-09T02:20:58.208912Z","steps":["trace[526939693] 'agreement among raft nodes before linearized reading'  (duration: 125.803196ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T02:21:01.537930Z","caller":"traceutil/trace.go:172","msg":"trace[1838618841] linearizableReadLoop","detail":"{readStateIndex:949; appliedIndex:949; }","duration":"474.214732ms","start":"2025-12-09T02:21:01.063699Z","end":"2025-12-09T02:21:01.537914Z","steps":["trace[1838618841] 'read index received'  (duration: 474.208744ms)","trace[1838618841] 'applied index is now lower than readState.Index'  (duration: 5.143µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-09T02:21:01.538134Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"474.487407ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T02:21:01.538156Z","caller":"traceutil/trace.go:172","msg":"trace[1753031098] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:847; }","duration":"474.522854ms","start":"2025-12-09T02:21:01.063627Z","end":"2025-12-09T02:21:01.538150Z","steps":["trace[1753031098] 'agreement among raft nodes before linearized reading'  (duration: 474.401975ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T02:21:01.538205Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-09T02:21:01.063605Z","time spent":"474.592817ms","remote":"127.0.0.1:37060","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-12-09T02:21:01.538287Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"456.892171ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T02:21:01.538322Z","caller":"traceutil/trace.go:172","msg":"trace[404851745] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:848; }","duration":"456.933995ms","start":"2025-12-09T02:21:01.081381Z","end":"2025-12-09T02:21:01.538315Z","steps":["trace[404851745] 'agreement among raft nodes before linearized reading'  (duration: 456.877994ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T02:21:01.538341Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-09T02:21:01.081362Z","time spent":"456.975262ms","remote":"127.0.0.1:37060","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-12-09T02:21:01.538428Z","caller":"traceutil/trace.go:172","msg":"trace[1888258504] transaction","detail":"{read_only:false; response_revision:848; number_of_response:1; }","duration":"600.591481ms","start":"2025-12-09T02:21:00.937830Z","end":"2025-12-09T02:21:01.538422Z","steps":["trace[1888258504] 'process raft request'  (duration: 600.281604ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T02:21:01.538486Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-09T02:21:00.937807Z","time spent":"600.639337ms","remote":"127.0.0.1:37012","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:847 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-12-09T02:21:01.538491Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"354.244835ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T02:21:01.538506Z","caller":"traceutil/trace.go:172","msg":"trace[1043831555] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:848; }","duration":"354.260366ms","start":"2025-12-09T02:21:01.184241Z","end":"2025-12-09T02:21:01.538502Z","steps":["trace[1043831555] 'agreement among raft nodes before linearized reading'  (duration: 354.236536ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T02:21:01.538946Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"388.934728ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T02:21:01.538996Z","caller":"traceutil/trace.go:172","msg":"trace[6101682] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:848; }","duration":"389.142667ms","start":"2025-12-09T02:21:01.149848Z","end":"2025-12-09T02:21:01.538991Z","steps":["trace[6101682] 'agreement among raft nodes before linearized reading'  (duration: 388.921164ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T02:21:01.539025Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-09T02:21:01.149736Z","time spent":"389.275312ms","remote":"127.0.0.1:36722","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2025-12-09T02:21:11.978710Z","caller":"traceutil/trace.go:172","msg":"trace[1856364477] linearizableReadLoop","detail":"{readStateIndex:979; appliedIndex:979; }","duration":"264.374425ms","start":"2025-12-09T02:21:11.714317Z","end":"2025-12-09T02:21:11.978691Z","steps":["trace[1856364477] 'read index received'  (duration: 264.369541ms)","trace[1856364477] 'applied index is now lower than readState.Index'  (duration: 4.202µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-09T02:21:11.979005Z","caller":"traceutil/trace.go:172","msg":"trace[1327415031] transaction","detail":"{read_only:false; response_revision:875; number_of_response:1; }","duration":"356.708174ms","start":"2025-12-09T02:21:11.622282Z","end":"2025-12-09T02:21:11.978990Z","steps":["trace[1327415031] 'process raft request'  (duration: 356.502466ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-09T02:21:11.979254Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-09T02:21:11.622256Z","time spent":"356.919969ms","remote":"127.0.0.1:37012","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:874 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-12-09T02:21:11.980256Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"264.978502ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-09T02:21:11.980786Z","caller":"traceutil/trace.go:172","msg":"trace[2097505626] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:875; }","duration":"266.222384ms","start":"2025-12-09T02:21:11.714313Z","end":"2025-12-09T02:21:11.980535Z","steps":["trace[2097505626] 'agreement among raft nodes before linearized reading'  (duration: 264.957158ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T02:29:14.223676Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1125}
	{"level":"info","ts":"2025-12-09T02:29:14.251346Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1125,"took":"27.183548ms","hash":860527092,"current-db-size-bytes":3694592,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":1699840,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-12-09T02:29:14.251470Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":860527092,"revision":1125,"compact-revision":-1}
	
	
	==> etcd [b12a876b1607656dcae43a6f788e50cffa4515f9e671385f84c9294e3f8ea253] <==
	{"level":"warn","ts":"2025-12-09T02:18:49.795725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:18:49.816240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:18:49.829924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:18:49.837887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:18:49.845703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:18:49.854827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T02:18:49.901744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37072","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-09T02:18:54.210475Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-09T02:18:54.210542Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-074400","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.13:2380"],"advertise-client-urls":["https://192.168.39.13:2379"]}
	{"level":"error","ts":"2025-12-09T02:18:54.212446Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-09T02:19:01.212667Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-09T02:19:01.213924Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-09T02:19:01.214248Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-09T02:19:01.214606Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-09T02:19:01.214721Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-09T02:19:01.214406Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.13:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-09T02:19:01.214761Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.13:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-09T02:19:01.214844Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.13:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-09T02:19:01.214433Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"1d3fba3e6c6ecbcd","current-leader-member-id":"1d3fba3e6c6ecbcd"}
	{"level":"info","ts":"2025-12-09T02:19:01.214963Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-09T02:19:01.214986Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-09T02:19:01.221888Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.13:2380"}
	{"level":"error","ts":"2025-12-09T02:19:01.221977Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.13:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-09T02:19:01.222007Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.13:2380"}
	{"level":"info","ts":"2025-12-09T02:19:01.222014Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-074400","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.13:2380"],"advertise-client-urls":["https://192.168.39.13:2379"]}
	
	
	==> kernel <==
	 02:29:42 up 13 min,  0 users,  load average: 0.25, 0.39, 0.37
	Linux functional-074400 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [527eadb8601da5def17eaa2edb74967c026667ca94cc89d7396a212f7a334be8] <==
	I1209 02:19:15.653479       1 policy_source.go:248] refreshing policies
	I1209 02:19:15.681401       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 02:19:15.896322       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1209 02:19:16.416999       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1209 02:19:17.193048       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1209 02:19:17.241707       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1209 02:19:17.277368       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 02:19:17.286105       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 02:19:22.288967       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 02:19:22.299738       1 controller.go:667] quota admission added evaluator for: endpoints
	I1209 02:19:22.301479       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1209 02:19:33.939396       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.109.174.82"}
	I1209 02:19:40.704502       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.107.58.57"}
	I1209 02:19:46.055771       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.97.104.66"}
	I1209 02:20:19.017969       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.102.207.80"}
	E1209 02:20:54.883855       1 conn.go:339] Error on socket receive: read tcp 192.168.39.13:8441->192.168.39.1:39546: use of closed network connection
	E1209 02:21:11.311679       1 conn.go:339] Error on socket receive: read tcp 192.168.39.13:8441->192.168.39.1:55936: use of closed network connection
	E1209 02:21:11.459426       1 conn.go:339] Error on socket receive: read tcp 192.168.39.13:8441->192.168.39.1:55958: use of closed network connection
	E1209 02:21:12.271547       1 conn.go:339] Error on socket receive: read tcp 192.168.39.13:8441->192.168.39.1:55998: use of closed network connection
	I1209 02:21:12.974659       1 controller.go:667] quota admission added evaluator for: namespaces
	I1209 02:21:13.362332       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.92.119"}
	I1209 02:21:13.396332       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.88.83"}
	E1209 02:21:14.318038       1 conn.go:339] Error on socket receive: read tcp 192.168.39.13:8441->192.168.39.1:38120: use of closed network connection
	E1209 02:21:17.072662       1 conn.go:339] Error on socket receive: read tcp 192.168.39.13:8441->192.168.39.1:38188: use of closed network connection
	I1209 02:29:15.557630       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [4564cf1f6c21fbf6c388bc6c5703fdc330467e6a7b0e87256575ffbb8496510c] <==
	I1209 02:18:53.759401       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.759504       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.759565       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.761008       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.761195       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.761278       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.761339       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.761405       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.761835       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.761889       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.762212       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.762287       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.762362       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.763758       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.767043       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.768526       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.768854       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.769014       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.769302       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.773628       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.835652       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:53.835673       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1209 02:18:53.835677       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1209 02:18:53.850356       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:54.024457       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-controller-manager [5720f563ee852c04aade1c1dcfe0527e5d0b9cafff5f5324ea01a48641c2b879] <==
	I1209 02:19:18.782519       1 shared_informer.go:377] "Caches are synced"
	I1209 02:19:18.782651       1 shared_informer.go:377] "Caches are synced"
	I1209 02:19:18.782851       1 shared_informer.go:377] "Caches are synced"
	I1209 02:19:18.786354       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1209 02:19:18.786380       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:19:18.786386       1 shared_informer.go:377] "Caches are synced"
	I1209 02:19:18.786453       1 shared_informer.go:377] "Caches are synced"
	I1209 02:19:18.786695       1 shared_informer.go:377] "Caches are synced"
	I1209 02:19:18.788109       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-074400"
	I1209 02:19:18.788161       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1209 02:19:18.788214       1 shared_informer.go:377] "Caches are synced"
	I1209 02:19:18.788232       1 shared_informer.go:377] "Caches are synced"
	I1209 02:19:18.800647       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:19:18.802843       1 shared_informer.go:377] "Caches are synced"
	I1209 02:19:18.885650       1 shared_informer.go:377] "Caches are synced"
	I1209 02:19:18.885669       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1209 02:19:18.885673       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1209 02:19:18.901262       1 shared_informer.go:377] "Caches are synced"
	E1209 02:21:13.116722       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:21:13.123459       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:21:13.129856       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:21:13.142405       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:21:13.160655       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:21:13.181089       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1209 02:21:13.186394       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [14173d291caa4718370d24685ac59608d6db0097c2bb19631282e71389726769] <==
	I1209 02:19:16.376344       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:19:16.477454       1 shared_informer.go:377] "Caches are synced"
	I1209 02:19:16.478153       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.13"]
	E1209 02:19:16.478347       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 02:19:16.560522       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1209 02:19:16.560595       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 02:19:16.560616       1 server_linux.go:136] "Using iptables Proxier"
	I1209 02:19:16.584037       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 02:19:16.584852       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1209 02:19:16.584967       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:19:16.594721       1 config.go:200] "Starting service config controller"
	I1209 02:19:16.594848       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 02:19:16.595293       1 config.go:106] "Starting endpoint slice config controller"
	I1209 02:19:16.595401       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 02:19:16.595642       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 02:19:16.595666       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 02:19:16.599361       1 config.go:309] "Starting node config controller"
	I1209 02:19:16.599484       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 02:19:16.599506       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 02:19:16.695426       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 02:19:16.696148       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 02:19:16.696170       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [823c02139d722aeeb211b14581e8aa5f8644ac71dc817330ea18d811ea6d2be9] <==
	E1209 02:18:50.598930       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes \"functional-074400\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]"
	I1209 02:18:50.623250       1 shared_informer.go:377] "Caches are synced"
	I1209 02:18:50.623366       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.13"]
	E1209 02:18:50.623546       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 02:18:50.691204       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1209 02:18:50.691370       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 02:18:50.691447       1 server_linux.go:136] "Using iptables Proxier"
	I1209 02:18:50.707836       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 02:18:50.708643       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1209 02:18:50.708664       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:18:50.711481       1 config.go:106] "Starting endpoint slice config controller"
	I1209 02:18:50.711590       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 02:18:50.716157       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 02:18:50.716175       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 02:18:50.718528       1 config.go:200] "Starting service config controller"
	I1209 02:18:50.718557       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 02:18:50.719218       1 config.go:309] "Starting node config controller"
	I1209 02:18:50.720198       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 02:18:50.720323       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 02:18:50.811923       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1209 02:18:50.816271       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 02:18:50.819646       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [bacf40dd7a6ed51348aa2ab49f8b52e918b4dfd14fb9632d8e828165a44be415] <==
	I1209 02:18:48.669545       1 serving.go:386] Generated self-signed cert in-memory
	W1209 02:18:50.520599       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 02:18:50.521792       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 02:18:50.521850       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 02:18:50.521868       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 02:18:50.585519       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1209 02:18:50.585561       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 02:18:50.589889       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:18:50.589966       1 shared_informer.go:370] "Waiting for caches to sync"
	I1209 02:18:50.592258       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1209 02:18:50.592353       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 02:18:50.700431       1 shared_informer.go:377] "Caches are synced"
	I1209 02:19:01.312948       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1209 02:19:01.313214       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1209 02:19:01.313232       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1209 02:19:01.313331       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 02:19:01.313500       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1209 02:19:01.313516       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d7e187bd9a3a6318fa2d523b92b9013a408f816ace5db8c1c222c7793427524f] <==
	E1209 02:19:15.548746       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1209 02:19:15.548770       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1209 02:19:15.546661       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1209 02:19:15.548799       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1209 02:19:15.552362       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1209 02:19:15.555476       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1209 02:19:15.564497       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1209 02:19:15.564624       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1209 02:19:15.564620       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1209 02:19:15.564752       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1209 02:19:15.564840       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1209 02:19:15.564884       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1209 02:19:15.564936       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1209 02:19:15.564975       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1209 02:19:15.565195       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1209 02:19:15.565232       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1209 02:19:15.565344       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1209 02:19:15.565387       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1209 02:19:15.569414       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1209 02:19:15.572307       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1209 02:19:15.572574       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1209 02:19:15.572705       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1209 02:19:15.572795       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1209 02:19:15.572832       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	I1209 02:19:16.825606       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 09 02:28:53 functional-074400 kubelet[6677]: E1209 02:28:53.176267    6677 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765247333175738624  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240697}  inodes_used:{value:104}}"
	Dec 09 02:28:53 functional-074400 kubelet[6677]: E1209 02:28:53.176314    6677 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765247333175738624  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240697}  inodes_used:{value:104}}"
	Dec 09 02:29:00 functional-074400 kubelet[6677]: E1209 02:29:00.885914    6677 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-zbhnt" podUID="0fb77730-c093-4c34-b77c-749ed6480841"
	Dec 09 02:29:03 functional-074400 kubelet[6677]: E1209 02:29:03.180892    6677 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765247343180265318  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240697}  inodes_used:{value:104}}"
	Dec 09 02:29:03 functional-074400 kubelet[6677]: E1209 02:29:03.180932    6677 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765247343180265318  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240697}  inodes_used:{value:104}}"
	Dec 09 02:29:11 functional-074400 kubelet[6677]: E1209 02:29:11.886393    6677 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-zbhnt" podUID="0fb77730-c093-4c34-b77c-749ed6480841"
	Dec 09 02:29:13 functional-074400 kubelet[6677]: E1209 02:29:13.184123    6677 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765247353183774840  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240697}  inodes_used:{value:104}}"
	Dec 09 02:29:13 functional-074400 kubelet[6677]: E1209 02:29:13.184145    6677 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765247353183774840  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240697}  inodes_used:{value:104}}"
	Dec 09 02:29:17 functional-074400 kubelet[6677]: E1209 02:29:17.106799    6677 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 09 02:29:17 functional-074400 kubelet[6677]: E1209 02:29:17.106857    6677 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 09 02:29:17 functional-074400 kubelet[6677]: E1209 02:29:17.107312    6677 kuberuntime_manager.go:1664] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-b84665fb8-j58f5_kubernetes-dashboard(51aeba10-230e-4e7c-9eed-7e81b00f9578): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 09 02:29:17 functional-074400 kubelet[6677]: E1209 02:29:17.107353    6677 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-j58f5" podUID="51aeba10-230e-4e7c-9eed-7e81b00f9578"
	Dec 09 02:29:23 functional-074400 kubelet[6677]: E1209 02:29:23.186316    6677 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765247363186015019  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240697}  inodes_used:{value:104}}"
	Dec 09 02:29:23 functional-074400 kubelet[6677]: E1209 02:29:23.186357    6677 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765247363186015019  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240697}  inodes_used:{value:104}}"
	Dec 09 02:29:23 functional-074400 kubelet[6677]: E1209 02:29:23.886347    6677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-074400" containerName="kube-scheduler"
	Dec 09 02:29:26 functional-074400 kubelet[6677]: E1209 02:29:26.890260    6677 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-zbhnt" podUID="0fb77730-c093-4c34-b77c-749ed6480841"
	Dec 09 02:29:30 functional-074400 kubelet[6677]: E1209 02:29:30.885885    6677 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-j58f5" containerName="kubernetes-dashboard"
	Dec 09 02:29:30 functional-074400 kubelet[6677]: E1209 02:29:30.893160    6677 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-j58f5" podUID="51aeba10-230e-4e7c-9eed-7e81b00f9578"
	Dec 09 02:29:31 functional-074400 kubelet[6677]: E1209 02:29:31.884951    6677 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-jc7zv" containerName="coredns"
	Dec 09 02:29:33 functional-074400 kubelet[6677]: E1209 02:29:33.189495    6677 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765247373188708372  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240697}  inodes_used:{value:104}}"
	Dec 09 02:29:33 functional-074400 kubelet[6677]: E1209 02:29:33.189963    6677 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765247373188708372  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240697}  inodes_used:{value:104}}"
	Dec 09 02:29:35 functional-074400 kubelet[6677]: E1209 02:29:35.885416    6677 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-074400" containerName="kube-apiserver"
	Dec 09 02:29:37 functional-074400 kubelet[6677]: E1209 02:29:37.886346    6677 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-zbhnt" podUID="0fb77730-c093-4c34-b77c-749ed6480841"
	Dec 09 02:29:42 functional-074400 kubelet[6677]: E1209 02:29:42.888698    6677 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-j58f5" containerName="kubernetes-dashboard"
	Dec 09 02:29:42 functional-074400 kubelet[6677]: E1209 02:29:42.900618    6677 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-j58f5" podUID="51aeba10-230e-4e7c-9eed-7e81b00f9578"
	
	
	==> storage-provisioner [5cfeb83409be8ec96c0b53f9c541a123c60a81424ed708b316900cfc6bac7634] <==
	W1209 02:29:16.943509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:29:18.947321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:29:18.953289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:29:20.957268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:29:20.965917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:29:22.969515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:29:22.979031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:29:24.982480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:29:24.989191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:29:26.993590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:29:27.004401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:29:29.007586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:29:29.017515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:29:31.022444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:29:31.031690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:29:33.035398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:29:33.040997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:29:35.044821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:29:35.050583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:29:37.056336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:29:37.065294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:29:39.070262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:29:39.077370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:29:41.081580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 02:29:41.090899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [93888ae1c2ed9e59841961ffbd087abe1b960432ec14bd6ede69fe08b06f6528] <==
	I1209 02:18:47.839420       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1209 02:18:47.847561       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-074400 -n functional-074400
helpers_test.go:269: (dbg) Run:  kubectl --context functional-074400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-5758569b79-qkj2j hello-node-connect-9f67c86d4-zbhnt dashboard-metrics-scraper-5565989548-2cgfd kubernetes-dashboard-b84665fb8-j58f5
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-074400 describe pod busybox-mount hello-node-5758569b79-qkj2j hello-node-connect-9f67c86d4-zbhnt dashboard-metrics-scraper-5565989548-2cgfd kubernetes-dashboard-b84665fb8-j58f5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-074400 describe pod busybox-mount hello-node-5758569b79-qkj2j hello-node-connect-9f67c86d4-zbhnt dashboard-metrics-scraper-5565989548-2cgfd kubernetes-dashboard-b84665fb8-j58f5: exit status 1 (102.741995ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-074400/192.168.39.13
	Start Time:       Tue, 09 Dec 2025 02:19:41 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://b89d37aeec53b9d6ee80b63c22598071f010cd29423aebcc64906de620467314
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 09 Dec 2025 02:20:13 +0000
	      Finished:     Tue, 09 Dec 2025 02:20:13 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tm5jg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-tm5jg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  10m    default-scheduler  Successfully assigned default/busybox-mount to functional-074400
	  Normal  Pulling    10m    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m30s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.253s (31.491s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m30s  kubelet            Container created
	  Normal  Started    9m30s  kubelet            Container started
	
	
	Name:             hello-node-5758569b79-qkj2j
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-074400/192.168.39.13
	Start Time:       Tue, 09 Dec 2025 02:19:46 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p4hc6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-p4hc6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m57s                  default-scheduler  Successfully assigned default/hello-node-5758569b79-qkj2j to functional-074400
	  Warning  Failed     5m33s (x3 over 9m)     kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m57s (x4 over 9m)     kubelet            Error: ErrImagePull
	  Warning  Failed     2m57s                  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    100s (x11 over 8m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     100s (x11 over 8m59s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    87s (x5 over 9m57s)    kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-9f67c86d4-zbhnt
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-074400/192.168.39.13
	Start Time:       Tue, 09 Dec 2025 02:19:40 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4qf9x (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4qf9x:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-zbhnt to functional-074400
	  Warning  Failed     9m32s                 kubelet            Failed to pull image "kicbase/echo-server": initializing source docker://kicbase/echo-server:latest: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m33s (x2 over 6m4s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m5s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     56s (x5 over 9m32s)   kubelet            Error: ErrImagePull
	  Warning  Failed     56s (x2 over 8m10s)   kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    6s (x15 over 9m31s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     6s (x15 over 9m31s)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-2cgfd" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-j58f5" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-074400 describe pod busybox-mount hello-node-5758569b79-qkj2j hello-node-connect-9f67c86d4-zbhnt dashboard-metrics-scraper-5565989548-2cgfd kubernetes-dashboard-b84665fb8-j58f5: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (603.08s)
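Both echo-server pods above are stuck in ImagePullBackOff because unauthenticated pulls of kicbase/echo-server from docker.io hit the toomanyrequests rate limit. A minimal workaround sketch, assuming the image can first be obtained on the host from an authenticated or mirrored source; the :1.0 tag below is illustrative and not taken from this run:

# Workaround sketch only; the :1.0 tag and the host-side authenticated pull are assumptions.
docker pull kicbase/echo-server:1.0
# Sideload the image into the cluster's container runtime so the kubelet does not need docker.io.
out/minikube-linux-amd64 -p functional-074400 image load kicbase/echo-server:1.0
# Pin the Deployment to the tagged image (a non-latest tag defaults to imagePullPolicy IfNotPresent).
kubectl --context functional-074400 set image deployment/hello-node-connect echo-server=kicbase/echo-server:1.0
kubectl --context functional-074400 rollout status deployment/hello-node-connect --timeout=120s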

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (600.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-074400 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-074400 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-5758569b79-qkj2j" [258ef2dd-6833-4fa8-a27c-7af7e20af1eb] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-074400 -n functional-074400
functional_test.go:1460: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-09 02:29:46.290800012 +0000 UTC m=+2049.884420108
functional_test.go:1460: (dbg) Run:  kubectl --context functional-074400 describe po hello-node-5758569b79-qkj2j -n default
functional_test.go:1460: (dbg) kubectl --context functional-074400 describe po hello-node-5758569b79-qkj2j -n default:
Name:             hello-node-5758569b79-qkj2j
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-074400/192.168.39.13
Start Time:       Tue, 09 Dec 2025 02:19:46 +0000
Labels:           app=hello-node
                  pod-template-hash=5758569b79
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-5758569b79
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p4hc6 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-p4hc6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-5758569b79-qkj2j to functional-074400
  Warning  Failed     5m36s (x3 over 9m3s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     3m (x4 over 9m3s)     kubelet            Error: ErrImagePull
  Warning  Failed     3m                    kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    103s (x11 over 9m2s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     103s (x11 over 9m2s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    90s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-074400 logs hello-node-5758569b79-qkj2j -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-074400 logs hello-node-5758569b79-qkj2j -n default: exit status 1 (67.649733ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-5758569b79-qkj2j" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-074400 logs hello-node-5758569b79-qkj2j -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (600.52s)
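The DeployApp timeout has the same root cause: the hello-node pod never left Pending because every pull of kicbase/echo-server was rate limited. A quick diagnostic sketch, outside the test harness, to confirm that a Pending pod is blocked on image pulls rather than on scheduling:

# Show pod phase and node assignment for the deployment's pods.
kubectl --context functional-074400 get pods -l app=hello-node -o wide
# List recent pull failures for pods, newest last.
kubectl --context functional-074400 get events -n default --field-selector involvedObject.kind=Pod,reason=Failed --sort-by=.lastTimestamp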

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074400 service --namespace=default --https --url hello-node: exit status 115 (270.356336ms)

                                                
                                                
-- stdout --
	https://192.168.39.13:30976
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-074400 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074400 service hello-node --url --format={{.IP}}: exit status 115 (263.048301ms)

                                                
                                                
-- stdout --
	192.168.39.13
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-074400 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074400 service hello-node --url: exit status 115 (267.937214ms)

                                                
                                                
-- stdout --
	http://192.168.39.13:30976
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-074400 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.13:30976
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.27s)
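The three ServiceCmd failures above (HTTPS, Format, URL) all exit with SVC_UNREACHABLE for the same reason: the NodePort URL is allocated, but the hello-node Service has no ready endpoints because its only pod is still in ImagePullBackOff. A short sketch for checking endpoints before asking minikube for the URL:

# The Service exists and has a NodePort allocated...
kubectl --context functional-074400 get service hello-node
# ...but ENDPOINTS stays empty until a backing pod is Ready.
kubectl --context functional-074400 get endpoints hello-node
# Only once an endpoint is listed should the printed URL be expected to serve traffic:
out/minikube-linux-amd64 -p functional-074400 service hello-node --url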

                                                
                                    
TestPreload (166.21s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-500822 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio
E1209 03:09:31.632467  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:09:39.455560  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-500822 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio: (1m35.29083034s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-500822 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-500822 image pull gcr.io/k8s-minikube/busybox: (2.270493518s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-500822
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-500822: (7.826773063s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-500822 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1209 03:11:12.464055  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:11:29.394744  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-500822 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (57.970784122s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-500822 image list
preload_test.go:73: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20250512-df8de77b

                                                
                                                
-- /stdout --
panic.go:615: *** TestPreload FAILED at 2025-12-09 03:11:37.78579129 +0000 UTC m=+4561.379411372
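TestPreload fails because gcr.io/k8s-minikube/busybox, pulled into the CRI-O image store before the stop, is no longer present after the restart with --preload=true; the image list above contains only the images shipped in the preload tarball. A manual reproduction sketch of the same sequence, using the flags from this run (the preload-check profile name is illustrative):

# Start without a preload, pull an extra image, then restart with the preload enabled.
out/minikube-linux-amd64 start -p preload-check --memory=3072 --preload=false --driver=kvm2 --container-runtime=crio
out/minikube-linux-amd64 -p preload-check image pull gcr.io/k8s-minikube/busybox
out/minikube-linux-amd64 stop -p preload-check
out/minikube-linux-amd64 start -p preload-check --preload=true --driver=kvm2 --container-runtime=crio
# If the grep finds nothing, the extra image did not survive the restart (the failure seen in this run).
out/minikube-linux-amd64 -p preload-check image list | grep k8s-minikube/busybox || echo "busybox missing after restart"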
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-500822 -n test-preload-500822
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-500822 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-500822 logs -n 25: (1.108320989s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-999895 ssh -n multinode-999895-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-999895     │ jenkins │ v1.37.0 │ 09 Dec 25 02:57 UTC │ 09 Dec 25 02:57 UTC │
	│ ssh     │ multinode-999895 ssh -n multinode-999895 sudo cat /home/docker/cp-test_multinode-999895-m03_multinode-999895.txt                                          │ multinode-999895     │ jenkins │ v1.37.0 │ 09 Dec 25 02:57 UTC │ 09 Dec 25 02:57 UTC │
	│ cp      │ multinode-999895 cp multinode-999895-m03:/home/docker/cp-test.txt multinode-999895-m02:/home/docker/cp-test_multinode-999895-m03_multinode-999895-m02.txt │ multinode-999895     │ jenkins │ v1.37.0 │ 09 Dec 25 02:57 UTC │ 09 Dec 25 02:57 UTC │
	│ ssh     │ multinode-999895 ssh -n multinode-999895-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-999895     │ jenkins │ v1.37.0 │ 09 Dec 25 02:57 UTC │ 09 Dec 25 02:57 UTC │
	│ ssh     │ multinode-999895 ssh -n multinode-999895-m02 sudo cat /home/docker/cp-test_multinode-999895-m03_multinode-999895-m02.txt                                  │ multinode-999895     │ jenkins │ v1.37.0 │ 09 Dec 25 02:57 UTC │ 09 Dec 25 02:57 UTC │
	│ node    │ multinode-999895 node stop m03                                                                                                                            │ multinode-999895     │ jenkins │ v1.37.0 │ 09 Dec 25 02:57 UTC │ 09 Dec 25 02:57 UTC │
	│ node    │ multinode-999895 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-999895     │ jenkins │ v1.37.0 │ 09 Dec 25 02:57 UTC │ 09 Dec 25 02:57 UTC │
	│ node    │ list -p multinode-999895                                                                                                                                  │ multinode-999895     │ jenkins │ v1.37.0 │ 09 Dec 25 02:57 UTC │                     │
	│ stop    │ -p multinode-999895                                                                                                                                       │ multinode-999895     │ jenkins │ v1.37.0 │ 09 Dec 25 02:57 UTC │ 09 Dec 25 03:00 UTC │
	│ start   │ -p multinode-999895 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-999895     │ jenkins │ v1.37.0 │ 09 Dec 25 03:00 UTC │ 09 Dec 25 03:03 UTC │
	│ node    │ list -p multinode-999895                                                                                                                                  │ multinode-999895     │ jenkins │ v1.37.0 │ 09 Dec 25 03:03 UTC │                     │
	│ node    │ multinode-999895 node delete m03                                                                                                                          │ multinode-999895     │ jenkins │ v1.37.0 │ 09 Dec 25 03:03 UTC │ 09 Dec 25 03:03 UTC │
	│ stop    │ multinode-999895 stop                                                                                                                                     │ multinode-999895     │ jenkins │ v1.37.0 │ 09 Dec 25 03:03 UTC │ 09 Dec 25 03:06 UTC │
	│ start   │ -p multinode-999895 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-999895     │ jenkins │ v1.37.0 │ 09 Dec 25 03:06 UTC │ 09 Dec 25 03:08 UTC │
	│ node    │ list -p multinode-999895                                                                                                                                  │ multinode-999895     │ jenkins │ v1.37.0 │ 09 Dec 25 03:08 UTC │                     │
	│ start   │ -p multinode-999895-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-999895-m02 │ jenkins │ v1.37.0 │ 09 Dec 25 03:08 UTC │                     │
	│ start   │ -p multinode-999895-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-999895-m03 │ jenkins │ v1.37.0 │ 09 Dec 25 03:08 UTC │ 09 Dec 25 03:08 UTC │
	│ node    │ add -p multinode-999895                                                                                                                                   │ multinode-999895     │ jenkins │ v1.37.0 │ 09 Dec 25 03:08 UTC │                     │
	│ delete  │ -p multinode-999895-m03                                                                                                                                   │ multinode-999895-m03 │ jenkins │ v1.37.0 │ 09 Dec 25 03:08 UTC │ 09 Dec 25 03:08 UTC │
	│ delete  │ -p multinode-999895                                                                                                                                       │ multinode-999895     │ jenkins │ v1.37.0 │ 09 Dec 25 03:08 UTC │ 09 Dec 25 03:08 UTC │
	│ start   │ -p test-preload-500822 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio                                │ test-preload-500822  │ jenkins │ v1.37.0 │ 09 Dec 25 03:08 UTC │ 09 Dec 25 03:10 UTC │
	│ image   │ test-preload-500822 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-500822  │ jenkins │ v1.37.0 │ 09 Dec 25 03:10 UTC │ 09 Dec 25 03:10 UTC │
	│ stop    │ -p test-preload-500822                                                                                                                                    │ test-preload-500822  │ jenkins │ v1.37.0 │ 09 Dec 25 03:10 UTC │ 09 Dec 25 03:10 UTC │
	│ start   │ -p test-preload-500822 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                          │ test-preload-500822  │ jenkins │ v1.37.0 │ 09 Dec 25 03:10 UTC │ 09 Dec 25 03:11 UTC │
	│ image   │ test-preload-500822 image list                                                                                                                            │ test-preload-500822  │ jenkins │ v1.37.0 │ 09 Dec 25 03:11 UTC │ 09 Dec 25 03:11 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 03:10:39
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 03:10:39.674346  289321 out.go:360] Setting OutFile to fd 1 ...
	I1209 03:10:39.674572  289321 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 03:10:39.674581  289321 out.go:374] Setting ErrFile to fd 2...
	I1209 03:10:39.674585  289321 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 03:10:39.674796  289321 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
	I1209 03:10:39.675275  289321 out.go:368] Setting JSON to false
	I1209 03:10:39.676191  289321 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":31990,"bootTime":1765217850,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 03:10:39.676251  289321 start.go:143] virtualization: kvm guest
	I1209 03:10:39.678512  289321 out.go:179] * [test-preload-500822] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 03:10:39.679972  289321 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 03:10:39.679984  289321 notify.go:221] Checking for updates...
	I1209 03:10:39.682657  289321 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:10:39.683855  289321 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-254936/kubeconfig
	I1209 03:10:39.685281  289321 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-254936/.minikube
	I1209 03:10:39.686815  289321 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 03:10:39.688132  289321 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:10:39.689879  289321 config.go:182] Loaded profile config "test-preload-500822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 03:10:39.690477  289321 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 03:10:39.726372  289321 out.go:179] * Using the kvm2 driver based on existing profile
	I1209 03:10:39.727755  289321 start.go:309] selected driver: kvm2
	I1209 03:10:39.727776  289321 start.go:927] validating driver "kvm2" against &{Name:test-preload-500822 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-500822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:10:39.727921  289321 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:10:39.728994  289321 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:10:39.729026  289321 cni.go:84] Creating CNI manager for ""
	I1209 03:10:39.729083  289321 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 03:10:39.729154  289321 start.go:353] cluster config:
	{Name:test-preload-500822 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-500822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:10:39.729285  289321 iso.go:125] acquiring lock: {Name:mk5e3a22cdf6cd1ed24c9a04adaf1049140c04b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:10:39.731679  289321 out.go:179] * Starting "test-preload-500822" primary control-plane node in "test-preload-500822" cluster
	I1209 03:10:39.732906  289321 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 03:10:39.732941  289321 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-254936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1209 03:10:39.732949  289321 cache.go:65] Caching tarball of preloaded images
	I1209 03:10:39.733075  289321 preload.go:238] Found /home/jenkins/minikube-integration/22081-254936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 03:10:39.733093  289321 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 03:10:39.733199  289321 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/test-preload-500822/config.json ...
	I1209 03:10:39.733450  289321 start.go:360] acquireMachinesLock for test-preload-500822: {Name:mkb4bf4bc2a6ad90b53de9be214957ca6809cd32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:10:39.733502  289321 start.go:364] duration metric: took 29.519µs to acquireMachinesLock for "test-preload-500822"
	I1209 03:10:39.733521  289321 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:10:39.733533  289321 fix.go:54] fixHost starting: 
	I1209 03:10:39.735694  289321 fix.go:112] recreateIfNeeded on test-preload-500822: state=Stopped err=<nil>
	W1209 03:10:39.735731  289321 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 03:10:39.737436  289321 out.go:252] * Restarting existing kvm2 VM for "test-preload-500822" ...
	I1209 03:10:39.737472  289321 main.go:143] libmachine: starting domain...
	I1209 03:10:39.737489  289321 main.go:143] libmachine: ensuring networks are active...
	I1209 03:10:39.738388  289321 main.go:143] libmachine: Ensuring network default is active
	I1209 03:10:39.738781  289321 main.go:143] libmachine: Ensuring network mk-test-preload-500822 is active
	I1209 03:10:39.739236  289321 main.go:143] libmachine: getting domain XML...
	I1209 03:10:39.740513  289321 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-500822</name>
	  <uuid>a92d72f1-7dd4-4ccf-831b-6848b5682f7e</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22081-254936/.minikube/machines/test-preload-500822/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22081-254936/.minikube/machines/test-preload-500822/test-preload-500822.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:66:5c:c0'/>
	      <source network='mk-test-preload-500822'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:ad:45:6f'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1209 03:10:41.037156  289321 main.go:143] libmachine: waiting for domain to start...
	I1209 03:10:41.038593  289321 main.go:143] libmachine: domain is now running
	I1209 03:10:41.038617  289321 main.go:143] libmachine: waiting for IP...
	I1209 03:10:41.039705  289321 main.go:143] libmachine: domain test-preload-500822 has defined MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:41.040478  289321 main.go:143] libmachine: domain test-preload-500822 has current primary IP address 192.168.39.162 and MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:41.040499  289321 main.go:143] libmachine: found domain IP: 192.168.39.162
	I1209 03:10:41.040509  289321 main.go:143] libmachine: reserving static IP address...
	I1209 03:10:41.041115  289321 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-500822", mac: "52:54:00:66:5c:c0", ip: "192.168.39.162"} in network mk-test-preload-500822: {Iface:virbr1 ExpiryTime:2025-12-09 04:09:10 +0000 UTC Type:0 Mac:52:54:00:66:5c:c0 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:test-preload-500822 Clientid:01:52:54:00:66:5c:c0}
	I1209 03:10:41.041154  289321 main.go:143] libmachine: skip adding static IP to network mk-test-preload-500822 - found existing host DHCP lease matching {name: "test-preload-500822", mac: "52:54:00:66:5c:c0", ip: "192.168.39.162"}
	I1209 03:10:41.041171  289321 main.go:143] libmachine: reserved static IP address 192.168.39.162 for domain test-preload-500822
	I1209 03:10:41.041183  289321 main.go:143] libmachine: waiting for SSH...
	I1209 03:10:41.041216  289321 main.go:143] libmachine: Getting to WaitForSSH function...
	I1209 03:10:41.043912  289321 main.go:143] libmachine: domain test-preload-500822 has defined MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:41.044420  289321 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:5c:c0", ip: ""} in network mk-test-preload-500822: {Iface:virbr1 ExpiryTime:2025-12-09 04:09:10 +0000 UTC Type:0 Mac:52:54:00:66:5c:c0 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:test-preload-500822 Clientid:01:52:54:00:66:5c:c0}
	I1209 03:10:41.044455  289321 main.go:143] libmachine: domain test-preload-500822 has defined IP address 192.168.39.162 and MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:41.044668  289321 main.go:143] libmachine: Using SSH client type: native
	I1209 03:10:41.045039  289321 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1209 03:10:41.045057  289321 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1209 03:10:44.112127  289321 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1209 03:10:50.192112  289321 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1209 03:10:53.194620  289321 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: connection refused
	I1209 03:10:56.304121  289321 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 03:10:56.307892  289321 main.go:143] libmachine: domain test-preload-500822 has defined MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:56.308380  289321 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:5c:c0", ip: ""} in network mk-test-preload-500822: {Iface:virbr1 ExpiryTime:2025-12-09 04:10:52 +0000 UTC Type:0 Mac:52:54:00:66:5c:c0 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:test-preload-500822 Clientid:01:52:54:00:66:5c:c0}
	I1209 03:10:56.308413  289321 main.go:143] libmachine: domain test-preload-500822 has defined IP address 192.168.39.162 and MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:56.308902  289321 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/test-preload-500822/config.json ...
	I1209 03:10:56.309153  289321 machine.go:94] provisionDockerMachine start ...
	I1209 03:10:56.311922  289321 main.go:143] libmachine: domain test-preload-500822 has defined MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:56.312384  289321 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:5c:c0", ip: ""} in network mk-test-preload-500822: {Iface:virbr1 ExpiryTime:2025-12-09 04:10:52 +0000 UTC Type:0 Mac:52:54:00:66:5c:c0 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:test-preload-500822 Clientid:01:52:54:00:66:5c:c0}
	I1209 03:10:56.312413  289321 main.go:143] libmachine: domain test-preload-500822 has defined IP address 192.168.39.162 and MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:56.312608  289321 main.go:143] libmachine: Using SSH client type: native
	I1209 03:10:56.312967  289321 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1209 03:10:56.312984  289321 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 03:10:56.422355  289321 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 03:10:56.422390  289321 buildroot.go:166] provisioning hostname "test-preload-500822"
	I1209 03:10:56.425876  289321 main.go:143] libmachine: domain test-preload-500822 has defined MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:56.426338  289321 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:5c:c0", ip: ""} in network mk-test-preload-500822: {Iface:virbr1 ExpiryTime:2025-12-09 04:10:52 +0000 UTC Type:0 Mac:52:54:00:66:5c:c0 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:test-preload-500822 Clientid:01:52:54:00:66:5c:c0}
	I1209 03:10:56.426378  289321 main.go:143] libmachine: domain test-preload-500822 has defined IP address 192.168.39.162 and MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:56.426560  289321 main.go:143] libmachine: Using SSH client type: native
	I1209 03:10:56.426788  289321 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1209 03:10:56.426805  289321 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-500822 && echo "test-preload-500822" | sudo tee /etc/hostname
	I1209 03:10:56.553044  289321 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-500822
	
	I1209 03:10:56.556332  289321 main.go:143] libmachine: domain test-preload-500822 has defined MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:56.556704  289321 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:5c:c0", ip: ""} in network mk-test-preload-500822: {Iface:virbr1 ExpiryTime:2025-12-09 04:10:52 +0000 UTC Type:0 Mac:52:54:00:66:5c:c0 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:test-preload-500822 Clientid:01:52:54:00:66:5c:c0}
	I1209 03:10:56.556741  289321 main.go:143] libmachine: domain test-preload-500822 has defined IP address 192.168.39.162 and MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:56.556935  289321 main.go:143] libmachine: Using SSH client type: native
	I1209 03:10:56.557218  289321 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1209 03:10:56.557243  289321 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-500822' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-500822/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-500822' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 03:10:56.676137  289321 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 03:10:56.676195  289321 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22081-254936/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-254936/.minikube}
	I1209 03:10:56.676227  289321 buildroot.go:174] setting up certificates
	I1209 03:10:56.676244  289321 provision.go:84] configureAuth start
	I1209 03:10:56.679316  289321 main.go:143] libmachine: domain test-preload-500822 has defined MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:56.679808  289321 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:5c:c0", ip: ""} in network mk-test-preload-500822: {Iface:virbr1 ExpiryTime:2025-12-09 04:10:52 +0000 UTC Type:0 Mac:52:54:00:66:5c:c0 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:test-preload-500822 Clientid:01:52:54:00:66:5c:c0}
	I1209 03:10:56.679861  289321 main.go:143] libmachine: domain test-preload-500822 has defined IP address 192.168.39.162 and MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:56.682769  289321 main.go:143] libmachine: domain test-preload-500822 has defined MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:56.683270  289321 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:5c:c0", ip: ""} in network mk-test-preload-500822: {Iface:virbr1 ExpiryTime:2025-12-09 04:10:52 +0000 UTC Type:0 Mac:52:54:00:66:5c:c0 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:test-preload-500822 Clientid:01:52:54:00:66:5c:c0}
	I1209 03:10:56.683299  289321 main.go:143] libmachine: domain test-preload-500822 has defined IP address 192.168.39.162 and MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:56.683477  289321 provision.go:143] copyHostCerts
	I1209 03:10:56.683575  289321 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-254936/.minikube/ca.pem, removing ...
	I1209 03:10:56.683591  289321 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-254936/.minikube/ca.pem
	I1209 03:10:56.683689  289321 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-254936/.minikube/ca.pem (1078 bytes)
	I1209 03:10:56.683796  289321 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-254936/.minikube/cert.pem, removing ...
	I1209 03:10:56.683805  289321 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-254936/.minikube/cert.pem
	I1209 03:10:56.683858  289321 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-254936/.minikube/cert.pem (1123 bytes)
	I1209 03:10:56.683954  289321 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-254936/.minikube/key.pem, removing ...
	I1209 03:10:56.683963  289321 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-254936/.minikube/key.pem
	I1209 03:10:56.684002  289321 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-254936/.minikube/key.pem (1679 bytes)
	I1209 03:10:56.684058  289321 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-254936/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca-key.pem org=jenkins.test-preload-500822 san=[127.0.0.1 192.168.39.162 localhost minikube test-preload-500822]
	I1209 03:10:56.792104  289321 provision.go:177] copyRemoteCerts
	I1209 03:10:56.792172  289321 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 03:10:56.795379  289321 main.go:143] libmachine: domain test-preload-500822 has defined MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:56.795886  289321 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:5c:c0", ip: ""} in network mk-test-preload-500822: {Iface:virbr1 ExpiryTime:2025-12-09 04:10:52 +0000 UTC Type:0 Mac:52:54:00:66:5c:c0 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:test-preload-500822 Clientid:01:52:54:00:66:5c:c0}
	I1209 03:10:56.795918  289321 main.go:143] libmachine: domain test-preload-500822 has defined IP address 192.168.39.162 and MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:56.796099  289321 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/test-preload-500822/id_rsa Username:docker}
	I1209 03:10:56.881737  289321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 03:10:56.916317  289321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 03:10:56.949297  289321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1209 03:10:56.988262  289321 provision.go:87] duration metric: took 311.99851ms to configureAuth
	I1209 03:10:56.988297  289321 buildroot.go:189] setting minikube options for container-runtime
	I1209 03:10:56.988536  289321 config.go:182] Loaded profile config "test-preload-500822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 03:10:56.991995  289321 main.go:143] libmachine: domain test-preload-500822 has defined MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:56.992598  289321 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:5c:c0", ip: ""} in network mk-test-preload-500822: {Iface:virbr1 ExpiryTime:2025-12-09 04:10:52 +0000 UTC Type:0 Mac:52:54:00:66:5c:c0 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:test-preload-500822 Clientid:01:52:54:00:66:5c:c0}
	I1209 03:10:56.992641  289321 main.go:143] libmachine: domain test-preload-500822 has defined IP address 192.168.39.162 and MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:56.992951  289321 main.go:143] libmachine: Using SSH client type: native
	I1209 03:10:56.993201  289321 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1209 03:10:56.993224  289321 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 03:10:57.251877  289321 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 03:10:57.251914  289321 machine.go:97] duration metric: took 942.746293ms to provisionDockerMachine
	I1209 03:10:57.251929  289321 start.go:293] postStartSetup for "test-preload-500822" (driver="kvm2")
	I1209 03:10:57.251940  289321 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 03:10:57.252031  289321 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 03:10:57.255145  289321 main.go:143] libmachine: domain test-preload-500822 has defined MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:57.255676  289321 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:5c:c0", ip: ""} in network mk-test-preload-500822: {Iface:virbr1 ExpiryTime:2025-12-09 04:10:52 +0000 UTC Type:0 Mac:52:54:00:66:5c:c0 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:test-preload-500822 Clientid:01:52:54:00:66:5c:c0}
	I1209 03:10:57.255709  289321 main.go:143] libmachine: domain test-preload-500822 has defined IP address 192.168.39.162 and MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:57.255903  289321 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/test-preload-500822/id_rsa Username:docker}
	I1209 03:10:57.344547  289321 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 03:10:57.351340  289321 info.go:137] Remote host: Buildroot 2025.02
	I1209 03:10:57.351372  289321 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-254936/.minikube/addons for local assets ...
	I1209 03:10:57.351540  289321 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-254936/.minikube/files for local assets ...
	I1209 03:10:57.351652  289321 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-254936/.minikube/files/etc/ssl/certs/2588542.pem -> 2588542.pem in /etc/ssl/certs
	I1209 03:10:57.351766  289321 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 03:10:57.366153  289321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/files/etc/ssl/certs/2588542.pem --> /etc/ssl/certs/2588542.pem (1708 bytes)
	I1209 03:10:57.402452  289321 start.go:296] duration metric: took 150.50375ms for postStartSetup
	I1209 03:10:57.402504  289321 fix.go:56] duration metric: took 17.668971384s for fixHost
	I1209 03:10:57.405690  289321 main.go:143] libmachine: domain test-preload-500822 has defined MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:57.406197  289321 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:5c:c0", ip: ""} in network mk-test-preload-500822: {Iface:virbr1 ExpiryTime:2025-12-09 04:10:52 +0000 UTC Type:0 Mac:52:54:00:66:5c:c0 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:test-preload-500822 Clientid:01:52:54:00:66:5c:c0}
	I1209 03:10:57.406232  289321 main.go:143] libmachine: domain test-preload-500822 has defined IP address 192.168.39.162 and MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:57.406448  289321 main.go:143] libmachine: Using SSH client type: native
	I1209 03:10:57.406703  289321 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1209 03:10:57.406717  289321 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1209 03:10:57.515154  289321 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765249857.477540570
	
	I1209 03:10:57.515192  289321 fix.go:216] guest clock: 1765249857.477540570
	I1209 03:10:57.515200  289321 fix.go:229] Guest: 2025-12-09 03:10:57.47754057 +0000 UTC Remote: 2025-12-09 03:10:57.402508228 +0000 UTC m=+17.779439663 (delta=75.032342ms)
	I1209 03:10:57.515221  289321 fix.go:200] guest clock delta is within tolerance: 75.032342ms
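
	The fix.go entries above compare the guest's "date +%s.%N" output against the host clock and accept the machine when the skew is small. Below is a minimal Go sketch of that comparison, reusing the timestamp value seen in the log; the 20-second tolerance is an assumed value for illustration, not necessarily minikube's actual threshold.

	// Sketch only: parse a guest `date +%s.%N` reading and compare it to the host clock.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns "1765249857.477540570" into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1765249857.477540570") // value taken from the log above
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 20 * time.Second // assumed threshold for this sketch
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < tolerance)
	}
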
	I1209 03:10:57.515228  289321 start.go:83] releasing machines lock for "test-preload-500822", held for 17.78171872s
	I1209 03:10:57.518981  289321 main.go:143] libmachine: domain test-preload-500822 has defined MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:57.519437  289321 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:5c:c0", ip: ""} in network mk-test-preload-500822: {Iface:virbr1 ExpiryTime:2025-12-09 04:10:52 +0000 UTC Type:0 Mac:52:54:00:66:5c:c0 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:test-preload-500822 Clientid:01:52:54:00:66:5c:c0}
	I1209 03:10:57.519470  289321 main.go:143] libmachine: domain test-preload-500822 has defined IP address 192.168.39.162 and MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:57.520226  289321 ssh_runner.go:195] Run: cat /version.json
	I1209 03:10:57.520295  289321 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 03:10:57.523456  289321 main.go:143] libmachine: domain test-preload-500822 has defined MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:57.523597  289321 main.go:143] libmachine: domain test-preload-500822 has defined MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:57.523911  289321 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:5c:c0", ip: ""} in network mk-test-preload-500822: {Iface:virbr1 ExpiryTime:2025-12-09 04:10:52 +0000 UTC Type:0 Mac:52:54:00:66:5c:c0 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:test-preload-500822 Clientid:01:52:54:00:66:5c:c0}
	I1209 03:10:57.523951  289321 main.go:143] libmachine: domain test-preload-500822 has defined IP address 192.168.39.162 and MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:57.524052  289321 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:5c:c0", ip: ""} in network mk-test-preload-500822: {Iface:virbr1 ExpiryTime:2025-12-09 04:10:52 +0000 UTC Type:0 Mac:52:54:00:66:5c:c0 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:test-preload-500822 Clientid:01:52:54:00:66:5c:c0}
	I1209 03:10:57.524086  289321 main.go:143] libmachine: domain test-preload-500822 has defined IP address 192.168.39.162 and MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:57.524094  289321 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/test-preload-500822/id_rsa Username:docker}
	I1209 03:10:57.524336  289321 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/test-preload-500822/id_rsa Username:docker}
	I1209 03:10:57.606716  289321 ssh_runner.go:195] Run: systemctl --version
	I1209 03:10:57.634319  289321 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 03:10:57.787635  289321 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 03:10:57.796739  289321 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 03:10:57.796819  289321 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 03:10:57.822433  289321 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 03:10:57.822477  289321 start.go:496] detecting cgroup driver to use...
	I1209 03:10:57.822562  289321 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 03:10:57.845104  289321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 03:10:57.864379  289321 docker.go:218] disabling cri-docker service (if available) ...
	I1209 03:10:57.864448  289321 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 03:10:57.884799  289321 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 03:10:57.904256  289321 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 03:10:58.059263  289321 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 03:10:58.279049  289321 docker.go:234] disabling docker service ...
	I1209 03:10:58.279122  289321 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 03:10:58.296996  289321 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 03:10:58.313557  289321 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 03:10:58.481787  289321 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 03:10:58.626442  289321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 03:10:58.643935  289321 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 03:10:58.670488  289321 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 03:10:58.670557  289321 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 03:10:58.684675  289321 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 03:10:58.684788  289321 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 03:10:58.698591  289321 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 03:10:58.712563  289321 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 03:10:58.726558  289321 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 03:10:58.741024  289321 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 03:10:58.754820  289321 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 03:10:58.777631  289321 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
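
	The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch the cgroup manager to cgroupfs, and ensure net.ipv4.ip_unprivileged_port_start=0 appears under default_sysctls. The Go sketch below shows the same idempotent "set key to value" edit for the first two settings; rewriting the file in-process rather than shelling out to sed is a simplification, and running it requires root.

	// Sketch only: update or append `key = "value"` lines in the CRI-O drop-in config.
	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func setKey(conf []byte, key, value string) []byte {
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		line := fmt.Sprintf("%s = %q", key, value)
		if re.Match(conf) {
			return re.ReplaceAll(conf, []byte(line)) // rewrite the existing line, like sed -i
		}
		return append(conf, []byte("\n"+line+"\n")...) // otherwise append a new one
	}

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log above
		conf, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
		conf = setKey(conf, "cgroup_manager", "cgroupfs")
		if err := os.WriteFile(path, conf, 0644); err != nil {
			panic(err)
		}
	}
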
	I1209 03:10:58.791133  289321 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 03:10:58.802470  289321 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 03:10:58.802600  289321 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 03:10:58.824925  289321 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 03:10:58.837974  289321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:10:58.983954  289321 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 03:10:59.101782  289321 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 03:10:59.101946  289321 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 03:10:59.107938  289321 start.go:564] Will wait 60s for crictl version
	I1209 03:10:59.108001  289321 ssh_runner.go:195] Run: which crictl
	I1209 03:10:59.112602  289321 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 03:10:59.152022  289321 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 03:10:59.152119  289321 ssh_runner.go:195] Run: crio --version
	I1209 03:10:59.185146  289321 ssh_runner.go:195] Run: crio --version
	I1209 03:10:59.218103  289321 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1209 03:10:59.222242  289321 main.go:143] libmachine: domain test-preload-500822 has defined MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:59.222721  289321 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:5c:c0", ip: ""} in network mk-test-preload-500822: {Iface:virbr1 ExpiryTime:2025-12-09 04:10:52 +0000 UTC Type:0 Mac:52:54:00:66:5c:c0 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:test-preload-500822 Clientid:01:52:54:00:66:5c:c0}
	I1209 03:10:59.222755  289321 main.go:143] libmachine: domain test-preload-500822 has defined IP address 192.168.39.162 and MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:10:59.223064  289321 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 03:10:59.228327  289321 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 03:10:59.244606  289321 kubeadm.go:884] updating cluster {Name:test-preload-500822 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-500822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 03:10:59.244805  289321 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 03:10:59.244886  289321 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 03:10:59.283180  289321 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1209 03:10:59.283258  289321 ssh_runner.go:195] Run: which lz4
	I1209 03:10:59.288171  289321 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 03:10:59.293238  289321 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 03:10:59.293274  289321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1209 03:11:00.739078  289321 crio.go:462] duration metric: took 1.450937654s to copy over tarball
	I1209 03:11:00.739174  289321 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 03:11:02.291498  289321 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.552273051s)
	I1209 03:11:02.291536  289321 crio.go:469] duration metric: took 1.552423651s to extract the tarball
	I1209 03:11:02.291545  289321 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 03:11:02.330233  289321 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 03:11:02.369898  289321 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 03:11:02.369928  289321 cache_images.go:86] Images are preloaded, skipping loading
	I1209 03:11:02.369937  289321 kubeadm.go:935] updating node { 192.168.39.162 8443 v1.34.2 crio true true} ...
	I1209 03:11:02.370050  289321 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-500822 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:test-preload-500822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 03:11:02.370148  289321 ssh_runner.go:195] Run: crio config
	I1209 03:11:02.421363  289321 cni.go:84] Creating CNI manager for ""
	I1209 03:11:02.421393  289321 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 03:11:02.421413  289321 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 03:11:02.421443  289321 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.162 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-500822 NodeName:test-preload-500822 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 03:11:02.421600  289321 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.162
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-500822"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.162"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.162"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 03:11:02.421687  289321 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 03:11:02.436027  289321 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 03:11:02.436113  289321 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 03:11:02.449815  289321 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1209 03:11:02.475892  289321 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 03:11:02.499334  289321 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1209 03:11:02.523566  289321 ssh_runner.go:195] Run: grep 192.168.39.162	control-plane.minikube.internal$ /etc/hosts
	I1209 03:11:02.528481  289321 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
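
	The bash one-liner above rewrites /etc/hosts: it drops any existing control-plane.minikube.internal entry and appends a fresh "192.168.39.162<TAB>control-plane.minikube.internal" line. Below is a minimal Go sketch of the same upsert; writing the result to /tmp/hosts.updated instead of copying it over /etc/hosts is an assumption that mirrors the /tmp/h.$$ + cp pattern in the logged command.

	// Sketch only: remove stale host entries (like `grep -v $'\t<name>$'`) and append a new mapping.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func upsertHostsEntry(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(hosts, "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop the stale entry for this name
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		updated := upsertHostsEntry(strings.TrimRight(string(data), "\n"),
			"192.168.39.162", "control-plane.minikube.internal")
		// Write to a scratch file; copying it over /etc/hosts would need root.
		if err := os.WriteFile("/tmp/hosts.updated", []byte(updated), 0644); err != nil {
			panic(err)
		}
		fmt.Println("wrote /tmp/hosts.updated")
	}
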
	I1209 03:11:02.546451  289321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:11:02.694060  289321 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 03:11:02.717229  289321 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/test-preload-500822 for IP: 192.168.39.162
	I1209 03:11:02.717258  289321 certs.go:195] generating shared ca certs ...
	I1209 03:11:02.717309  289321 certs.go:227] acquiring lock for ca certs: {Name:mk538e8c05758246ce904354c7e7ace78887d181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:11:02.717478  289321 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-254936/.minikube/ca.key
	I1209 03:11:02.717570  289321 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-254936/.minikube/proxy-client-ca.key
	I1209 03:11:02.717586  289321 certs.go:257] generating profile certs ...
	I1209 03:11:02.717681  289321 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/test-preload-500822/client.key
	I1209 03:11:02.717743  289321 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/test-preload-500822/apiserver.key.238538f7
	I1209 03:11:02.717781  289321 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/test-preload-500822/proxy-client.key
	I1209 03:11:02.717936  289321 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/258854.pem (1338 bytes)
	W1209 03:11:02.717971  289321 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-254936/.minikube/certs/258854_empty.pem, impossibly tiny 0 bytes
	I1209 03:11:02.717978  289321 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 03:11:02.718003  289321 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem (1078 bytes)
	I1209 03:11:02.718024  289321 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/cert.pem (1123 bytes)
	I1209 03:11:02.718046  289321 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/key.pem (1679 bytes)
	I1209 03:11:02.718087  289321 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-254936/.minikube/files/etc/ssl/certs/2588542.pem (1708 bytes)
	I1209 03:11:02.718694  289321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 03:11:02.761019  289321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 03:11:02.794198  289321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 03:11:02.831161  289321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 03:11:02.865708  289321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/test-preload-500822/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1209 03:11:02.900423  289321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/test-preload-500822/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 03:11:02.933772  289321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/test-preload-500822/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 03:11:02.966908  289321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/test-preload-500822/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 03:11:03.000439  289321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/files/etc/ssl/certs/2588542.pem --> /usr/share/ca-certificates/2588542.pem (1708 bytes)
	I1209 03:11:03.033813  289321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 03:11:03.067899  289321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/certs/258854.pem --> /usr/share/ca-certificates/258854.pem (1338 bytes)
	I1209 03:11:03.100423  289321 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 03:11:03.123593  289321 ssh_runner.go:195] Run: openssl version
	I1209 03:11:03.130867  289321 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/258854.pem
	I1209 03:11:03.143931  289321 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/258854.pem /etc/ssl/certs/258854.pem
	I1209 03:11:03.156602  289321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/258854.pem
	I1209 03:11:03.163078  289321 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 02:16 /usr/share/ca-certificates/258854.pem
	I1209 03:11:03.163151  289321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/258854.pem
	I1209 03:11:03.171722  289321 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 03:11:03.185180  289321 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/258854.pem /etc/ssl/certs/51391683.0
	I1209 03:11:03.198696  289321 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2588542.pem
	I1209 03:11:03.213182  289321 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2588542.pem /etc/ssl/certs/2588542.pem
	I1209 03:11:03.226877  289321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2588542.pem
	I1209 03:11:03.233181  289321 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 02:16 /usr/share/ca-certificates/2588542.pem
	I1209 03:11:03.233276  289321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2588542.pem
	I1209 03:11:03.242337  289321 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 03:11:03.258083  289321 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2588542.pem /etc/ssl/certs/3ec20f2e.0
	I1209 03:11:03.274072  289321 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 03:11:03.289160  289321 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 03:11:03.302423  289321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 03:11:03.308560  289321 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1209 03:11:03.308630  289321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 03:11:03.316663  289321 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 03:11:03.332291  289321 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
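
	The commands above install each CA certificate the way OpenSSL's trust directory expects: place the PEM under /usr/share/ca-certificates, compute its subject hash with "openssl x509 -hash -noout", and symlink /etc/ssl/certs/<hash>.0 at it. The Go sketch below shows that step for the minikubeCA.pem path seen in the log; it shells out to openssl and needs root to create the symlink.

	// Sketch only: mirror the logged `openssl x509 -hash -noout` + `ln -fs` sequence.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func installCert(pemPath string) error {
		// openssl prints the subject hash (e.g. "b5213941") on its own line.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("openssl x509 -hash: %w", err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // emulate `ln -fs`: replace any existing link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
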
	I1209 03:11:03.348329  289321 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 03:11:03.354852  289321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 03:11:03.363467  289321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 03:11:03.372447  289321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 03:11:03.381581  289321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 03:11:03.390141  289321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 03:11:03.398994  289321 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 03:11:03.407724  289321 kubeadm.go:401] StartCluster: {Name:test-preload-500822 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-500822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:11:03.407813  289321 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 03:11:03.407895  289321 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 03:11:03.470350  289321 cri.go:89] found id: ""
	I1209 03:11:03.470427  289321 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 03:11:03.488093  289321 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 03:11:03.488123  289321 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 03:11:03.488181  289321 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 03:11:03.503969  289321 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 03:11:03.504437  289321 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-500822" does not appear in /home/jenkins/minikube-integration/22081-254936/kubeconfig
	I1209 03:11:03.504548  289321 kubeconfig.go:62] /home/jenkins/minikube-integration/22081-254936/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-500822" cluster setting kubeconfig missing "test-preload-500822" context setting]
	I1209 03:11:03.504854  289321 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-254936/kubeconfig: {Name:mkaafbe94dbea876978b17d37022d815642e1aad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:11:03.505407  289321 kapi.go:59] client config for test-preload-500822: &rest.Config{Host:"https://192.168.39.162:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-254936/.minikube/profiles/test-preload-500822/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-254936/.minikube/profiles/test-preload-500822/client.key", CAFile:"/home/jenkins/minikube-integration/22081-254936/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28162e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 03:11:03.505879  289321 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1209 03:11:03.505902  289321 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 03:11:03.505910  289321 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1209 03:11:03.505920  289321 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 03:11:03.505925  289321 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1209 03:11:03.506315  289321 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 03:11:03.518893  289321 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.162
	I1209 03:11:03.518923  289321 kubeadm.go:1161] stopping kube-system containers ...
	I1209 03:11:03.518938  289321 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 03:11:03.518987  289321 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 03:11:03.558842  289321 cri.go:89] found id: ""
	I1209 03:11:03.558943  289321 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 03:11:03.579532  289321 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 03:11:03.592551  289321 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 03:11:03.592575  289321 kubeadm.go:158] found existing configuration files:
	
	I1209 03:11:03.592627  289321 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 03:11:03.604248  289321 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 03:11:03.604313  289321 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 03:11:03.616765  289321 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 03:11:03.629092  289321 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 03:11:03.629158  289321 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 03:11:03.641767  289321 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 03:11:03.654390  289321 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 03:11:03.654454  289321 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 03:11:03.667556  289321 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 03:11:03.680006  289321 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 03:11:03.680075  289321 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 03:11:03.692930  289321 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 03:11:03.705844  289321 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:11:03.763747  289321 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:11:05.640498  289321 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.876709615s)
	I1209 03:11:05.640593  289321 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:11:05.894973  289321 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:11:05.976750  289321 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:11:06.068785  289321 api_server.go:52] waiting for apiserver process to appear ...
	I1209 03:11:06.068920  289321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 03:11:06.569903  289321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 03:11:07.069523  289321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 03:11:07.569134  289321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 03:11:07.592443  289321 api_server.go:72] duration metric: took 1.523675312s to wait for apiserver process to appear ...
	I1209 03:11:07.592479  289321 api_server.go:88] waiting for apiserver healthz status ...
	I1209 03:11:07.592512  289321 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1209 03:11:09.388708  289321 api_server.go:279] https://192.168.39.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 03:11:09.388749  289321 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 03:11:09.388771  289321 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1209 03:11:09.496939  289321 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 03:11:09.496999  289321 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
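
	The api_server.go entries above poll https://192.168.39.162:8443/healthz roughly every 500ms, treating the 403 (anonymous user) and 500 (post-start hooks still initializing) responses as "not ready yet" until the endpoint answers 200. Below is a minimal Go sketch of such a polling loop; skipping TLS verification is a simplification, since the real client is built from the profile's client certificate and CA.

	// Sketch only: poll the apiserver /healthz endpoint until it reports healthy or a deadline passes.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // simplification, see note above
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
		}
		return fmt.Errorf("apiserver did not become healthy within %v", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.162:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
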
	I1209 03:11:09.593393  289321 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1209 03:11:09.598612  289321 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 03:11:09.598648  289321 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 03:11:10.092911  289321 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1209 03:11:10.106949  289321 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 03:11:10.106984  289321 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 03:11:10.592641  289321 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1209 03:11:10.600326  289321 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 03:11:10.600359  289321 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 03:11:11.092995  289321 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1209 03:11:11.098308  289321 api_server.go:279] https://192.168.39.162:8443/healthz returned 200:
	ok
	I1209 03:11:11.108535  289321 api_server.go:141] control plane version: v1.34.2
	I1209 03:11:11.108573  289321 api_server.go:131] duration metric: took 3.516085609s to wait for apiserver health ...
	I1209 03:11:11.108587  289321 cni.go:84] Creating CNI manager for ""
	I1209 03:11:11.108596  289321 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 03:11:11.110715  289321 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 03:11:11.112104  289321 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 03:11:11.144525  289321 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 03:11:11.181027  289321 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 03:11:11.187156  289321 system_pods.go:59] 5 kube-system pods found
	I1209 03:11:11.187197  289321 system_pods.go:61] "coredns-66bc5c9577-d99xm" [c374bc36-015a-483d-a39f-a8ad6d3c77c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 03:11:11.187218  289321 system_pods.go:61] "etcd-test-preload-500822" [743d9d03-7399-47f3-86f5-f05f2beb9081] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 03:11:11.187230  289321 system_pods.go:61] "kube-apiserver-test-preload-500822" [1449acb1-66cc-4bfc-adcf-052d0a15d48b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 03:11:11.187236  289321 system_pods.go:61] "kube-proxy-hc6dw" [8707fd2e-04a4-43af-a006-b0e095a07219] Running
	I1209 03:11:11.187246  289321 system_pods.go:61] "storage-provisioner" [5a2b67d2-ed64-4802-a716-91a73b8a5c7a] Running
	I1209 03:11:11.187259  289321 system_pods.go:74] duration metric: took 6.203085ms to wait for pod list to return data ...
	I1209 03:11:11.187276  289321 node_conditions.go:102] verifying NodePressure condition ...
	I1209 03:11:11.196566  289321 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 03:11:11.196594  289321 node_conditions.go:123] node cpu capacity is 2
	I1209 03:11:11.196608  289321 node_conditions.go:105] duration metric: took 9.327094ms to run NodePressure ...
	I1209 03:11:11.196662  289321 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:11:11.481046  289321 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1209 03:11:11.485380  289321 kubeadm.go:744] kubelet initialised
	I1209 03:11:11.485403  289321 kubeadm.go:745] duration metric: took 4.326657ms waiting for restarted kubelet to initialise ...
	I1209 03:11:11.485425  289321 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 03:11:11.504193  289321 ops.go:34] apiserver oom_adj: -16
	I1209 03:11:11.504225  289321 kubeadm.go:602] duration metric: took 8.016094414s to restartPrimaryControlPlane
	I1209 03:11:11.504240  289321 kubeadm.go:403] duration metric: took 8.096526565s to StartCluster
	I1209 03:11:11.504266  289321 settings.go:142] acquiring lock: {Name:mkec34d0133156567c6c6050ab2f8de3f197c63b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:11:11.504361  289321 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-254936/kubeconfig
	I1209 03:11:11.505041  289321 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-254936/kubeconfig: {Name:mkaafbe94dbea876978b17d37022d815642e1aad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:11:11.505291  289321 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 03:11:11.505426  289321 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 03:11:11.505535  289321 addons.go:70] Setting storage-provisioner=true in profile "test-preload-500822"
	I1209 03:11:11.505545  289321 config.go:182] Loaded profile config "test-preload-500822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 03:11:11.505558  289321 addons.go:239] Setting addon storage-provisioner=true in "test-preload-500822"
	W1209 03:11:11.505569  289321 addons.go:248] addon storage-provisioner should already be in state true
	I1209 03:11:11.505598  289321 host.go:66] Checking if "test-preload-500822" exists ...
	I1209 03:11:11.505607  289321 addons.go:70] Setting default-storageclass=true in profile "test-preload-500822"
	I1209 03:11:11.505636  289321 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-500822"
	I1209 03:11:11.506849  289321 out.go:179] * Verifying Kubernetes components...
	I1209 03:11:11.507992  289321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:11:11.507999  289321 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 03:11:11.508164  289321 kapi.go:59] client config for test-preload-500822: &rest.Config{Host:"https://192.168.39.162:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-254936/.minikube/profiles/test-preload-500822/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-254936/.minikube/profiles/test-preload-500822/client.key", CAFile:"/home/jenkins/minikube-integration/22081-254936/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28162e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 03:11:11.508518  289321 addons.go:239] Setting addon default-storageclass=true in "test-preload-500822"
	W1209 03:11:11.508537  289321 addons.go:248] addon default-storageclass should already be in state true
	I1209 03:11:11.508563  289321 host.go:66] Checking if "test-preload-500822" exists ...
	I1209 03:11:11.509242  289321 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 03:11:11.509261  289321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 03:11:11.510436  289321 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 03:11:11.510458  289321 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 03:11:11.512310  289321 main.go:143] libmachine: domain test-preload-500822 has defined MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:11:11.512791  289321 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:5c:c0", ip: ""} in network mk-test-preload-500822: {Iface:virbr1 ExpiryTime:2025-12-09 04:10:52 +0000 UTC Type:0 Mac:52:54:00:66:5c:c0 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:test-preload-500822 Clientid:01:52:54:00:66:5c:c0}
	I1209 03:11:11.512835  289321 main.go:143] libmachine: domain test-preload-500822 has defined IP address 192.168.39.162 and MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:11:11.513003  289321 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/test-preload-500822/id_rsa Username:docker}
	I1209 03:11:11.513429  289321 main.go:143] libmachine: domain test-preload-500822 has defined MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:11:11.513840  289321 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:5c:c0", ip: ""} in network mk-test-preload-500822: {Iface:virbr1 ExpiryTime:2025-12-09 04:10:52 +0000 UTC Type:0 Mac:52:54:00:66:5c:c0 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:test-preload-500822 Clientid:01:52:54:00:66:5c:c0}
	I1209 03:11:11.513865  289321 main.go:143] libmachine: domain test-preload-500822 has defined IP address 192.168.39.162 and MAC address 52:54:00:66:5c:c0 in network mk-test-preload-500822
	I1209 03:11:11.513997  289321 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/test-preload-500822/id_rsa Username:docker}
	I1209 03:11:11.704146  289321 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 03:11:11.725507  289321 node_ready.go:35] waiting up to 6m0s for node "test-preload-500822" to be "Ready" ...
	I1209 03:11:11.728183  289321 node_ready.go:49] node "test-preload-500822" is "Ready"
	I1209 03:11:11.728211  289321 node_ready.go:38] duration metric: took 2.648258ms for node "test-preload-500822" to be "Ready" ...
	I1209 03:11:11.728224  289321 api_server.go:52] waiting for apiserver process to appear ...
	I1209 03:11:11.728271  289321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 03:11:11.750405  289321 api_server.go:72] duration metric: took 245.074562ms to wait for apiserver process to appear ...
	I1209 03:11:11.750446  289321 api_server.go:88] waiting for apiserver healthz status ...
	I1209 03:11:11.750475  289321 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1209 03:11:11.756931  289321 api_server.go:279] https://192.168.39.162:8443/healthz returned 200:
	ok
	I1209 03:11:11.757897  289321 api_server.go:141] control plane version: v1.34.2
	I1209 03:11:11.757935  289321 api_server.go:131] duration metric: took 7.480281ms to wait for apiserver health ...
	I1209 03:11:11.757945  289321 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 03:11:11.761498  289321 system_pods.go:59] 5 kube-system pods found
	I1209 03:11:11.761528  289321 system_pods.go:61] "coredns-66bc5c9577-d99xm" [c374bc36-015a-483d-a39f-a8ad6d3c77c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 03:11:11.761535  289321 system_pods.go:61] "etcd-test-preload-500822" [743d9d03-7399-47f3-86f5-f05f2beb9081] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 03:11:11.761543  289321 system_pods.go:61] "kube-apiserver-test-preload-500822" [1449acb1-66cc-4bfc-adcf-052d0a15d48b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 03:11:11.761548  289321 system_pods.go:61] "kube-proxy-hc6dw" [8707fd2e-04a4-43af-a006-b0e095a07219] Running
	I1209 03:11:11.761552  289321 system_pods.go:61] "storage-provisioner" [5a2b67d2-ed64-4802-a716-91a73b8a5c7a] Running
	I1209 03:11:11.761559  289321 system_pods.go:74] duration metric: took 3.607096ms to wait for pod list to return data ...
	I1209 03:11:11.761567  289321 default_sa.go:34] waiting for default service account to be created ...
	I1209 03:11:11.764473  289321 default_sa.go:45] found service account: "default"
	I1209 03:11:11.764500  289321 default_sa.go:55] duration metric: took 2.923743ms for default service account to be created ...
	I1209 03:11:11.764509  289321 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 03:11:11.767572  289321 system_pods.go:86] 5 kube-system pods found
	I1209 03:11:11.767615  289321 system_pods.go:89] "coredns-66bc5c9577-d99xm" [c374bc36-015a-483d-a39f-a8ad6d3c77c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 03:11:11.767627  289321 system_pods.go:89] "etcd-test-preload-500822" [743d9d03-7399-47f3-86f5-f05f2beb9081] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 03:11:11.767647  289321 system_pods.go:89] "kube-apiserver-test-preload-500822" [1449acb1-66cc-4bfc-adcf-052d0a15d48b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 03:11:11.767653  289321 system_pods.go:89] "kube-proxy-hc6dw" [8707fd2e-04a4-43af-a006-b0e095a07219] Running
	I1209 03:11:11.767660  289321 system_pods.go:89] "storage-provisioner" [5a2b67d2-ed64-4802-a716-91a73b8a5c7a] Running
	I1209 03:11:11.767718  289321 retry.go:31] will retry after 283.517224ms: missing components: kube-controller-manager, kube-scheduler
	I1209 03:11:11.812569  289321 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 03:11:11.812914  289321 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 03:11:12.056358  289321 system_pods.go:86] 5 kube-system pods found
	I1209 03:11:12.056394  289321 system_pods.go:89] "coredns-66bc5c9577-d99xm" [c374bc36-015a-483d-a39f-a8ad6d3c77c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 03:11:12.056402  289321 system_pods.go:89] "etcd-test-preload-500822" [743d9d03-7399-47f3-86f5-f05f2beb9081] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 03:11:12.056410  289321 system_pods.go:89] "kube-apiserver-test-preload-500822" [1449acb1-66cc-4bfc-adcf-052d0a15d48b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 03:11:12.056413  289321 system_pods.go:89] "kube-proxy-hc6dw" [8707fd2e-04a4-43af-a006-b0e095a07219] Running
	I1209 03:11:12.056418  289321 system_pods.go:89] "storage-provisioner" [5a2b67d2-ed64-4802-a716-91a73b8a5c7a] Running
	I1209 03:11:12.056434  289321 retry.go:31] will retry after 263.551497ms: missing components: kube-controller-manager, kube-scheduler
	I1209 03:11:12.325433  289321 system_pods.go:86] 5 kube-system pods found
	I1209 03:11:12.325487  289321 system_pods.go:89] "coredns-66bc5c9577-d99xm" [c374bc36-015a-483d-a39f-a8ad6d3c77c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 03:11:12.325502  289321 system_pods.go:89] "etcd-test-preload-500822" [743d9d03-7399-47f3-86f5-f05f2beb9081] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 03:11:12.325513  289321 system_pods.go:89] "kube-apiserver-test-preload-500822" [1449acb1-66cc-4bfc-adcf-052d0a15d48b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 03:11:12.325518  289321 system_pods.go:89] "kube-proxy-hc6dw" [8707fd2e-04a4-43af-a006-b0e095a07219] Running
	I1209 03:11:12.325523  289321 system_pods.go:89] "storage-provisioner" [5a2b67d2-ed64-4802-a716-91a73b8a5c7a] Running
	I1209 03:11:12.325546  289321 retry.go:31] will retry after 363.264368ms: missing components: kube-controller-manager, kube-scheduler
	I1209 03:11:12.505641  289321 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1209 03:11:12.506978  289321 addons.go:530] duration metric: took 1.001553777s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1209 03:11:12.692455  289321 system_pods.go:86] 5 kube-system pods found
	I1209 03:11:12.692497  289321 system_pods.go:89] "coredns-66bc5c9577-d99xm" [c374bc36-015a-483d-a39f-a8ad6d3c77c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 03:11:12.692506  289321 system_pods.go:89] "etcd-test-preload-500822" [743d9d03-7399-47f3-86f5-f05f2beb9081] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 03:11:12.692528  289321 system_pods.go:89] "kube-apiserver-test-preload-500822" [1449acb1-66cc-4bfc-adcf-052d0a15d48b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 03:11:12.692535  289321 system_pods.go:89] "kube-proxy-hc6dw" [8707fd2e-04a4-43af-a006-b0e095a07219] Running
	I1209 03:11:12.692546  289321 system_pods.go:89] "storage-provisioner" [5a2b67d2-ed64-4802-a716-91a73b8a5c7a] Running
	I1209 03:11:12.692570  289321 retry.go:31] will retry after 370.569754ms: missing components: kube-controller-manager, kube-scheduler
	I1209 03:11:13.067742  289321 system_pods.go:86] 5 kube-system pods found
	I1209 03:11:13.067787  289321 system_pods.go:89] "coredns-66bc5c9577-d99xm" [c374bc36-015a-483d-a39f-a8ad6d3c77c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 03:11:13.067799  289321 system_pods.go:89] "etcd-test-preload-500822" [743d9d03-7399-47f3-86f5-f05f2beb9081] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 03:11:13.067809  289321 system_pods.go:89] "kube-apiserver-test-preload-500822" [1449acb1-66cc-4bfc-adcf-052d0a15d48b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 03:11:13.067816  289321 system_pods.go:89] "kube-proxy-hc6dw" [8707fd2e-04a4-43af-a006-b0e095a07219] Running
	I1209 03:11:13.067836  289321 system_pods.go:89] "storage-provisioner" [5a2b67d2-ed64-4802-a716-91a73b8a5c7a] Running
	I1209 03:11:13.067856  289321 retry.go:31] will retry after 618.577374ms: missing components: kube-controller-manager, kube-scheduler
	I1209 03:11:13.692222  289321 system_pods.go:86] 5 kube-system pods found
	I1209 03:11:13.692263  289321 system_pods.go:89] "coredns-66bc5c9577-d99xm" [c374bc36-015a-483d-a39f-a8ad6d3c77c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 03:11:13.692272  289321 system_pods.go:89] "etcd-test-preload-500822" [743d9d03-7399-47f3-86f5-f05f2beb9081] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 03:11:13.692281  289321 system_pods.go:89] "kube-apiserver-test-preload-500822" [1449acb1-66cc-4bfc-adcf-052d0a15d48b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 03:11:13.692286  289321 system_pods.go:89] "kube-proxy-hc6dw" [8707fd2e-04a4-43af-a006-b0e095a07219] Running
	I1209 03:11:13.692290  289321 system_pods.go:89] "storage-provisioner" [5a2b67d2-ed64-4802-a716-91a73b8a5c7a] Running
	I1209 03:11:13.692309  289321 retry.go:31] will retry after 837.109563ms: missing components: kube-controller-manager, kube-scheduler
	I1209 03:11:14.534373  289321 system_pods.go:86] 5 kube-system pods found
	I1209 03:11:14.534418  289321 system_pods.go:89] "coredns-66bc5c9577-d99xm" [c374bc36-015a-483d-a39f-a8ad6d3c77c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 03:11:14.534428  289321 system_pods.go:89] "etcd-test-preload-500822" [743d9d03-7399-47f3-86f5-f05f2beb9081] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 03:11:14.534437  289321 system_pods.go:89] "kube-apiserver-test-preload-500822" [1449acb1-66cc-4bfc-adcf-052d0a15d48b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 03:11:14.534441  289321 system_pods.go:89] "kube-proxy-hc6dw" [8707fd2e-04a4-43af-a006-b0e095a07219] Running
	I1209 03:11:14.534445  289321 system_pods.go:89] "storage-provisioner" [5a2b67d2-ed64-4802-a716-91a73b8a5c7a] Running
	I1209 03:11:14.534463  289321 retry.go:31] will retry after 1.091993786s: missing components: kube-controller-manager, kube-scheduler
	I1209 03:11:15.631554  289321 system_pods.go:86] 5 kube-system pods found
	I1209 03:11:15.631596  289321 system_pods.go:89] "coredns-66bc5c9577-d99xm" [c374bc36-015a-483d-a39f-a8ad6d3c77c3] Running
	I1209 03:11:15.631609  289321 system_pods.go:89] "etcd-test-preload-500822" [743d9d03-7399-47f3-86f5-f05f2beb9081] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 03:11:15.631617  289321 system_pods.go:89] "kube-apiserver-test-preload-500822" [1449acb1-66cc-4bfc-adcf-052d0a15d48b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 03:11:15.631625  289321 system_pods.go:89] "kube-proxy-hc6dw" [8707fd2e-04a4-43af-a006-b0e095a07219] Running
	I1209 03:11:15.631631  289321 system_pods.go:89] "storage-provisioner" [5a2b67d2-ed64-4802-a716-91a73b8a5c7a] Running
	I1209 03:11:15.631653  289321 retry.go:31] will retry after 902.867474ms: missing components: kube-controller-manager, kube-scheduler
	I1209 03:11:16.538603  289321 system_pods.go:86] 5 kube-system pods found
	I1209 03:11:16.538640  289321 system_pods.go:89] "coredns-66bc5c9577-d99xm" [c374bc36-015a-483d-a39f-a8ad6d3c77c3] Running
	I1209 03:11:16.538653  289321 system_pods.go:89] "etcd-test-preload-500822" [743d9d03-7399-47f3-86f5-f05f2beb9081] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 03:11:16.538663  289321 system_pods.go:89] "kube-apiserver-test-preload-500822" [1449acb1-66cc-4bfc-adcf-052d0a15d48b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 03:11:16.538669  289321 system_pods.go:89] "kube-proxy-hc6dw" [8707fd2e-04a4-43af-a006-b0e095a07219] Running
	I1209 03:11:16.538678  289321 system_pods.go:89] "storage-provisioner" [5a2b67d2-ed64-4802-a716-91a73b8a5c7a] Running
	I1209 03:11:16.538700  289321 retry.go:31] will retry after 1.280647977s: missing components: kube-controller-manager, kube-scheduler
	I1209 03:11:17.823954  289321 system_pods.go:86] 5 kube-system pods found
	I1209 03:11:17.823989  289321 system_pods.go:89] "coredns-66bc5c9577-d99xm" [c374bc36-015a-483d-a39f-a8ad6d3c77c3] Running
	I1209 03:11:17.823999  289321 system_pods.go:89] "etcd-test-preload-500822" [743d9d03-7399-47f3-86f5-f05f2beb9081] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 03:11:17.824009  289321 system_pods.go:89] "kube-apiserver-test-preload-500822" [1449acb1-66cc-4bfc-adcf-052d0a15d48b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 03:11:17.824015  289321 system_pods.go:89] "kube-proxy-hc6dw" [8707fd2e-04a4-43af-a006-b0e095a07219] Running
	I1209 03:11:17.824019  289321 system_pods.go:89] "storage-provisioner" [5a2b67d2-ed64-4802-a716-91a73b8a5c7a] Running
	I1209 03:11:17.824039  289321 retry.go:31] will retry after 1.644621862s: missing components: kube-controller-manager, kube-scheduler
	I1209 03:11:19.473559  289321 system_pods.go:86] 5 kube-system pods found
	I1209 03:11:19.473599  289321 system_pods.go:89] "coredns-66bc5c9577-d99xm" [c374bc36-015a-483d-a39f-a8ad6d3c77c3] Running
	I1209 03:11:19.473619  289321 system_pods.go:89] "etcd-test-preload-500822" [743d9d03-7399-47f3-86f5-f05f2beb9081] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 03:11:19.473629  289321 system_pods.go:89] "kube-apiserver-test-preload-500822" [1449acb1-66cc-4bfc-adcf-052d0a15d48b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 03:11:19.473637  289321 system_pods.go:89] "kube-proxy-hc6dw" [8707fd2e-04a4-43af-a006-b0e095a07219] Running
	I1209 03:11:19.473642  289321 system_pods.go:89] "storage-provisioner" [5a2b67d2-ed64-4802-a716-91a73b8a5c7a] Running
	I1209 03:11:19.473663  289321 retry.go:31] will retry after 2.255900956s: missing components: kube-controller-manager, kube-scheduler
	I1209 03:11:21.734426  289321 system_pods.go:86] 5 kube-system pods found
	I1209 03:11:21.734464  289321 system_pods.go:89] "coredns-66bc5c9577-d99xm" [c374bc36-015a-483d-a39f-a8ad6d3c77c3] Running
	I1209 03:11:21.734478  289321 system_pods.go:89] "etcd-test-preload-500822" [743d9d03-7399-47f3-86f5-f05f2beb9081] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 03:11:21.734485  289321 system_pods.go:89] "kube-apiserver-test-preload-500822" [1449acb1-66cc-4bfc-adcf-052d0a15d48b] Running
	I1209 03:11:21.734492  289321 system_pods.go:89] "kube-proxy-hc6dw" [8707fd2e-04a4-43af-a006-b0e095a07219] Running
	I1209 03:11:21.734497  289321 system_pods.go:89] "storage-provisioner" [5a2b67d2-ed64-4802-a716-91a73b8a5c7a] Running
	I1209 03:11:21.734516  289321 retry.go:31] will retry after 2.624752353s: missing components: kube-controller-manager, kube-scheduler
	I1209 03:11:24.363322  289321 system_pods.go:86] 5 kube-system pods found
	I1209 03:11:24.363358  289321 system_pods.go:89] "coredns-66bc5c9577-d99xm" [c374bc36-015a-483d-a39f-a8ad6d3c77c3] Running
	I1209 03:11:24.363368  289321 system_pods.go:89] "etcd-test-preload-500822" [743d9d03-7399-47f3-86f5-f05f2beb9081] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 03:11:24.363373  289321 system_pods.go:89] "kube-apiserver-test-preload-500822" [1449acb1-66cc-4bfc-adcf-052d0a15d48b] Running
	I1209 03:11:24.363378  289321 system_pods.go:89] "kube-proxy-hc6dw" [8707fd2e-04a4-43af-a006-b0e095a07219] Running
	I1209 03:11:24.363382  289321 system_pods.go:89] "storage-provisioner" [5a2b67d2-ed64-4802-a716-91a73b8a5c7a] Running
	I1209 03:11:24.363399  289321 retry.go:31] will retry after 3.557616299s: missing components: kube-controller-manager, kube-scheduler
	I1209 03:11:27.927799  289321 system_pods.go:86] 7 kube-system pods found
	I1209 03:11:27.927847  289321 system_pods.go:89] "coredns-66bc5c9577-d99xm" [c374bc36-015a-483d-a39f-a8ad6d3c77c3] Running
	I1209 03:11:27.927853  289321 system_pods.go:89] "etcd-test-preload-500822" [743d9d03-7399-47f3-86f5-f05f2beb9081] Running
	I1209 03:11:27.927857  289321 system_pods.go:89] "kube-apiserver-test-preload-500822" [1449acb1-66cc-4bfc-adcf-052d0a15d48b] Running
	I1209 03:11:27.927864  289321 system_pods.go:89] "kube-controller-manager-test-preload-500822" [3efe875a-6c97-4132-b1e0-365ea9625973] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 03:11:27.927869  289321 system_pods.go:89] "kube-proxy-hc6dw" [8707fd2e-04a4-43af-a006-b0e095a07219] Running
	I1209 03:11:27.927876  289321 system_pods.go:89] "kube-scheduler-test-preload-500822" [602f4e6f-b10d-4158-84b2-ea917dd59a95] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 03:11:27.927880  289321 system_pods.go:89] "storage-provisioner" [5a2b67d2-ed64-4802-a716-91a73b8a5c7a] Running
	I1209 03:11:27.927889  289321 system_pods.go:126] duration metric: took 16.163374657s to wait for k8s-apps to be running ...
	I1209 03:11:27.927897  289321 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 03:11:27.927961  289321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 03:11:27.950557  289321 system_svc.go:56] duration metric: took 22.649219ms WaitForService to wait for kubelet
	I1209 03:11:27.950593  289321 kubeadm.go:587] duration metric: took 16.445271477s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:11:27.950611  289321 node_conditions.go:102] verifying NodePressure condition ...
	I1209 03:11:27.954194  289321 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 03:11:27.954218  289321 node_conditions.go:123] node cpu capacity is 2
	I1209 03:11:27.954229  289321 node_conditions.go:105] duration metric: took 3.614585ms to run NodePressure ...
	I1209 03:11:27.954242  289321 start.go:242] waiting for startup goroutines ...
	I1209 03:11:27.954249  289321 start.go:247] waiting for cluster config update ...
	I1209 03:11:27.954261  289321 start.go:256] writing updated cluster config ...
	I1209 03:11:27.954579  289321 ssh_runner.go:195] Run: rm -f paused
	I1209 03:11:27.960472  289321 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 03:11:27.961029  289321 kapi.go:59] client config for test-preload-500822: &rest.Config{Host:"https://192.168.39.162:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-254936/.minikube/profiles/test-preload-500822/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-254936/.minikube/profiles/test-preload-500822/client.key", CAFile:"/home/jenkins/minikube-integration/22081-254936/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28162e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 03:11:27.966218  289321 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-d99xm" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:11:27.971366  289321 pod_ready.go:94] pod "coredns-66bc5c9577-d99xm" is "Ready"
	I1209 03:11:27.971392  289321 pod_ready.go:86] duration metric: took 5.149745ms for pod "coredns-66bc5c9577-d99xm" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:11:27.974950  289321 pod_ready.go:83] waiting for pod "etcd-test-preload-500822" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:11:27.984746  289321 pod_ready.go:94] pod "etcd-test-preload-500822" is "Ready"
	I1209 03:11:27.984778  289321 pod_ready.go:86] duration metric: took 9.799487ms for pod "etcd-test-preload-500822" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:11:27.988874  289321 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-500822" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:11:27.997408  289321 pod_ready.go:94] pod "kube-apiserver-test-preload-500822" is "Ready"
	I1209 03:11:27.997434  289321 pod_ready.go:86] duration metric: took 8.534201ms for pod "kube-apiserver-test-preload-500822" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:11:28.000746  289321 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-500822" in "kube-system" namespace to be "Ready" or be gone ...
	W1209 03:11:30.010884  289321 pod_ready.go:104] pod "kube-controller-manager-test-preload-500822" is not "Ready", error: <nil>
	W1209 03:11:32.507391  289321 pod_ready.go:104] pod "kube-controller-manager-test-preload-500822" is not "Ready", error: <nil>
	W1209 03:11:35.008146  289321 pod_ready.go:104] pod "kube-controller-manager-test-preload-500822" is not "Ready", error: <nil>
	I1209 03:11:37.508315  289321 pod_ready.go:94] pod "kube-controller-manager-test-preload-500822" is "Ready"
	I1209 03:11:37.508347  289321 pod_ready.go:86] duration metric: took 9.507579098s for pod "kube-controller-manager-test-preload-500822" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:11:37.510775  289321 pod_ready.go:83] waiting for pod "kube-proxy-hc6dw" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:11:37.516406  289321 pod_ready.go:94] pod "kube-proxy-hc6dw" is "Ready"
	I1209 03:11:37.516440  289321 pod_ready.go:86] duration metric: took 5.645249ms for pod "kube-proxy-hc6dw" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:11:37.518786  289321 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-500822" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:11:37.525083  289321 pod_ready.go:94] pod "kube-scheduler-test-preload-500822" is "Ready"
	I1209 03:11:37.525117  289321 pod_ready.go:86] duration metric: took 6.302806ms for pod "kube-scheduler-test-preload-500822" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:11:37.525135  289321 pod_ready.go:40] duration metric: took 9.564623583s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 03:11:37.571750  289321 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1209 03:11:37.573680  289321 out.go:179] * Done! kubectl is now configured to use "test-preload-500822" cluster and "default" namespace by default
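The repeated 500s in the healthz polling above are the apiserver reporting the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks as not yet finished; once both flip to [+], the endpoint returns 200 and the restarted control plane is accepted. As a rough sketch only (not part of the captured output), the same per-check breakdown can be requested by hand with ?verbose, reusing the client certificate paths shown in the kapi.go client config earlier in this log:

	# illustrative only; IP, port and certificate paths are taken from the log lines above
	curl --cacert /home/jenkins/minikube-integration/22081-254936/.minikube/ca.crt \
	  --cert /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/test-preload-500822/client.crt \
	  --key /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/test-preload-500822/client.key \
	  "https://192.168.39.162:8443/healthz?verbose"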
	
	
	==> CRI-O <==
	Dec 09 03:11:38 test-preload-500822 crio[845]: time="2025-12-09 03:11:38.419330673Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765249898419306511,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=86b9452b-ac07-4103-aa3f-30180bd08d36 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 03:11:38 test-preload-500822 crio[845]: time="2025-12-09 03:11:38.420909183Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=02f1e00c-b087-4dbe-aed3-b99a18cb0f3a name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:11:38 test-preload-500822 crio[845]: time="2025-12-09 03:11:38.420970520Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=02f1e00c-b087-4dbe-aed3-b99a18cb0f3a name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:11:38 test-preload-500822 crio[845]: time="2025-12-09 03:11:38.421706138Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:413198ceb1c9b88baf52625adf9d19a682c8c2e4218aabcef8f5974e97d0a154,PodSandboxId:42474dcbcdb3ed209925f4d419ff3b954b6e9673e7ff49359131c762dbc22fcc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765249886437811723,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-500822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b8913b9e86312d5683fec650338018e,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.port
s: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afbe908db0a9d310704b8d38884373ae224a67ea20b0f0f6bc89980c7aaf91ab,PodSandboxId:1ff6b622ba95cd90519797f968dcc118f48aaa1e27d48d17d5bb431755b52b6d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765249886427709498,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-500822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c33d252b1721a7fd83
759abb4f6d6c,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f9a919aa697c06cfec7a57e3ae3a5f70d7db4f1c9d282809ced4e4fb2b9db0f,PodSandboxId:547b51d122b82da20193856d457b336e981655f56d043c4b3103a76b5cca20b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765249874131317832,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-d9
9xm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c374bc36-015a-483d-a39f-a8ad6d3c77c3,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:558c2b538ba4e3c241b57c5107c4987ea749e0efab9ae055a6d825765018eb26,PodSandboxId:d2ab9fbbf99ce28e9c62f2e026c4892ff4913bb2f54a61ba5405854d55c5440e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5b
eb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765249870410128435,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hc6dw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8707fd2e-04a4-43af-a006-b0e095a07219,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4c4905e19bc165d54c1af250a9c4818a5b6a4384cbe29474cf68f91d97a79d4,PodSandboxId:74173fa06f726c07368573450905f40fd0145304520cc443e146026a124988a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709
a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765249870445524040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a2b67d2-ed64-4802-a716-91a73b8a5c7a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad8ae8d00bc6e371ff96964b92f8c54adbbfe0cbf3b05c29c4d61e3a5acf3ea7,PodSandboxId:55c0ffb7013008678536526e6059107044fe191f049a1598bd11cf4c06f46383,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765249867306815301,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-500822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7475b4f0e2f89082ed5b3e62025ceb5e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:385b9e5a93437d09780ad1ce7473e410df0ca490c6385c7d2e71bb146be674b3,PodSandboxId:de0df51e272926913b5b78a72db332d882bf3e32fb7611078f77166c89413d8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765249867302064413,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-500822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9709878124410761f5de7b63f14f563,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=02f1e00c-b087-4dbe-aed3-b99a18cb0f3a name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 09 03:11:38 test-preload-500822 crio[845]: time="2025-12-09 03:11:38.459584767Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=98847d0f-1dc3-4cfd-afbb-652f0f9feed9 name=/runtime.v1.RuntimeService/Version
	Dec 09 03:11:38 test-preload-500822 crio[845]: time="2025-12-09 03:11:38.459707450Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=98847d0f-1dc3-4cfd-afbb-652f0f9feed9 name=/runtime.v1.RuntimeService/Version
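The Version, ImageFsInfo and ListContainers entries here are CRI-O tracing the CRI RPCs issued while these logs were collected. As a hedged sketch (not run as part of this test), the same state can be inspected by hand with crictl from a shell on the node, for example via minikube ssh -p test-preload-500822:

	# illustrative only; crictl queries the same CRI endpoints traced above
	sudo crictl version       # runtime name/version (cri-o 1.29.1 in the response above)
	sudo crictl ps -a         # container list behind ListContainers
	sudo crictl imagefsinfo   # image filesystem usage behind ImageFsInfo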
	Dec 09 03:11:38 test-preload-500822 crio[845]: time="2025-12-09 03:11:38.461524517Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=95c62a5a-8277-4656-a6ec-e292115890c4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 03:11:38 test-preload-500822 crio[845]: time="2025-12-09 03:11:38.461978635Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765249898461954704,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=95c62a5a-8277-4656-a6ec-e292115890c4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 03:11:38 test-preload-500822 crio[845]: time="2025-12-09 03:11:38.462932114Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=164300c0-ffe6-48d4-ba5b-bfcf34af8d22 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:11:38 test-preload-500822 crio[845]: time="2025-12-09 03:11:38.463248716Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=164300c0-ffe6-48d4-ba5b-bfcf34af8d22 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:11:38 test-preload-500822 crio[845]: time="2025-12-09 03:11:38.463723765Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:413198ceb1c9b88baf52625adf9d19a682c8c2e4218aabcef8f5974e97d0a154,PodSandboxId:42474dcbcdb3ed209925f4d419ff3b954b6e9673e7ff49359131c762dbc22fcc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765249886437811723,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-500822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b8913b9e86312d5683fec650338018e,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.port
s: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afbe908db0a9d310704b8d38884373ae224a67ea20b0f0f6bc89980c7aaf91ab,PodSandboxId:1ff6b622ba95cd90519797f968dcc118f48aaa1e27d48d17d5bb431755b52b6d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765249886427709498,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-500822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c33d252b1721a7fd83
759abb4f6d6c,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f9a919aa697c06cfec7a57e3ae3a5f70d7db4f1c9d282809ced4e4fb2b9db0f,PodSandboxId:547b51d122b82da20193856d457b336e981655f56d043c4b3103a76b5cca20b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765249874131317832,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-d9
9xm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c374bc36-015a-483d-a39f-a8ad6d3c77c3,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:558c2b538ba4e3c241b57c5107c4987ea749e0efab9ae055a6d825765018eb26,PodSandboxId:d2ab9fbbf99ce28e9c62f2e026c4892ff4913bb2f54a61ba5405854d55c5440e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5b
eb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765249870410128435,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hc6dw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8707fd2e-04a4-43af-a006-b0e095a07219,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4c4905e19bc165d54c1af250a9c4818a5b6a4384cbe29474cf68f91d97a79d4,PodSandboxId:74173fa06f726c07368573450905f40fd0145304520cc443e146026a124988a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709
a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765249870445524040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a2b67d2-ed64-4802-a716-91a73b8a5c7a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad8ae8d00bc6e371ff96964b92f8c54adbbfe0cbf3b05c29c4d61e3a5acf3ea7,PodSandboxId:55c0ffb7013008678536526e6059107044fe191f049a1598bd11cf4c06f46383,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765249867306815301,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-500822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7475b4f0e2f89082ed5b3e62025ceb5e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:385b9e5a93437d09780ad1ce7473e410df0ca490c6385c7d2e71bb146be674b3,PodSandboxId:de0df51e272926913b5b78a72db332d882bf3e32fb7611078f77166c89413d8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765249867302064413,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-500822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9709878124410761f5de7b63f14f563,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=164300c0-ffe6-48d4-ba5b-bfcf34af8d22 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 09 03:11:38 test-preload-500822 crio[845]: time="2025-12-09 03:11:38.504715761Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2e5a8e61-0070-4202-9d64-b35c1267f099 name=/runtime.v1.RuntimeService/Version
	Dec 09 03:11:38 test-preload-500822 crio[845]: time="2025-12-09 03:11:38.504816315Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2e5a8e61-0070-4202-9d64-b35c1267f099 name=/runtime.v1.RuntimeService/Version
	Dec 09 03:11:38 test-preload-500822 crio[845]: time="2025-12-09 03:11:38.506263029Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e9954137-40f6-4ee3-a553-6a52ebbae282 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 03:11:38 test-preload-500822 crio[845]: time="2025-12-09 03:11:38.506984438Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765249898506960093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e9954137-40f6-4ee3-a553-6a52ebbae282 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 03:11:38 test-preload-500822 crio[845]: time="2025-12-09 03:11:38.508439691Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=449bf93c-0fd2-498a-bcdf-6ec6af0d8694 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:11:38 test-preload-500822 crio[845]: time="2025-12-09 03:11:38.508545797Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=449bf93c-0fd2-498a-bcdf-6ec6af0d8694 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:11:38 test-preload-500822 crio[845]: time="2025-12-09 03:11:38.508745226Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:413198ceb1c9b88baf52625adf9d19a682c8c2e4218aabcef8f5974e97d0a154,PodSandboxId:42474dcbcdb3ed209925f4d419ff3b954b6e9673e7ff49359131c762dbc22fcc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765249886437811723,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-500822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b8913b9e86312d5683fec650338018e,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.port
s: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afbe908db0a9d310704b8d38884373ae224a67ea20b0f0f6bc89980c7aaf91ab,PodSandboxId:1ff6b622ba95cd90519797f968dcc118f48aaa1e27d48d17d5bb431755b52b6d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765249886427709498,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-500822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c33d252b1721a7fd83
759abb4f6d6c,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f9a919aa697c06cfec7a57e3ae3a5f70d7db4f1c9d282809ced4e4fb2b9db0f,PodSandboxId:547b51d122b82da20193856d457b336e981655f56d043c4b3103a76b5cca20b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765249874131317832,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-d9
9xm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c374bc36-015a-483d-a39f-a8ad6d3c77c3,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:558c2b538ba4e3c241b57c5107c4987ea749e0efab9ae055a6d825765018eb26,PodSandboxId:d2ab9fbbf99ce28e9c62f2e026c4892ff4913bb2f54a61ba5405854d55c5440e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5b
eb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765249870410128435,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hc6dw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8707fd2e-04a4-43af-a006-b0e095a07219,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4c4905e19bc165d54c1af250a9c4818a5b6a4384cbe29474cf68f91d97a79d4,PodSandboxId:74173fa06f726c07368573450905f40fd0145304520cc443e146026a124988a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709
a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765249870445524040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a2b67d2-ed64-4802-a716-91a73b8a5c7a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad8ae8d00bc6e371ff96964b92f8c54adbbfe0cbf3b05c29c4d61e3a5acf3ea7,PodSandboxId:55c0ffb7013008678536526e6059107044fe191f049a1598bd11cf4c06f46383,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765249867306815301,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-500822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7475b4f0e2f89082ed5b3e62025ceb5e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:385b9e5a93437d09780ad1ce7473e410df0ca490c6385c7d2e71bb146be674b3,PodSandboxId:de0df51e272926913b5b78a72db332d882bf3e32fb7611078f77166c89413d8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765249867302064413,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-500822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9709878124410761f5de7b63f14f563,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=449bf93c-0fd2-498a-bcdf-6ec6af0d8694 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 09 03:11:38 test-preload-500822 crio[845]: time="2025-12-09 03:11:38.542368748Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=62c2e7e7-2455-4c14-b0ce-7072fb0b5da8 name=/runtime.v1.RuntimeService/Version
	Dec 09 03:11:38 test-preload-500822 crio[845]: time="2025-12-09 03:11:38.542584187Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=62c2e7e7-2455-4c14-b0ce-7072fb0b5da8 name=/runtime.v1.RuntimeService/Version
	Dec 09 03:11:38 test-preload-500822 crio[845]: time="2025-12-09 03:11:38.545572252Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6e2645dc-9e62-46b0-b338-8e5c6454dbaf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 03:11:38 test-preload-500822 crio[845]: time="2025-12-09 03:11:38.546390858Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765249898546358192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6e2645dc-9e62-46b0-b338-8e5c6454dbaf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 03:11:38 test-preload-500822 crio[845]: time="2025-12-09 03:11:38.547589818Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=28c3c57e-1f57-4e86-8c64-7653b7443cc0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:11:38 test-preload-500822 crio[845]: time="2025-12-09 03:11:38.547693843Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=28c3c57e-1f57-4e86-8c64-7653b7443cc0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:11:38 test-preload-500822 crio[845]: time="2025-12-09 03:11:38.547875573Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:413198ceb1c9b88baf52625adf9d19a682c8c2e4218aabcef8f5974e97d0a154,PodSandboxId:42474dcbcdb3ed209925f4d419ff3b954b6e9673e7ff49359131c762dbc22fcc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765249886437811723,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-500822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b8913b9e86312d5683fec650338018e,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.port
s: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afbe908db0a9d310704b8d38884373ae224a67ea20b0f0f6bc89980c7aaf91ab,PodSandboxId:1ff6b622ba95cd90519797f968dcc118f48aaa1e27d48d17d5bb431755b52b6d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765249886427709498,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-500822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c33d252b1721a7fd83
759abb4f6d6c,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f9a919aa697c06cfec7a57e3ae3a5f70d7db4f1c9d282809ced4e4fb2b9db0f,PodSandboxId:547b51d122b82da20193856d457b336e981655f56d043c4b3103a76b5cca20b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765249874131317832,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-d9
9xm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c374bc36-015a-483d-a39f-a8ad6d3c77c3,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:558c2b538ba4e3c241b57c5107c4987ea749e0efab9ae055a6d825765018eb26,PodSandboxId:d2ab9fbbf99ce28e9c62f2e026c4892ff4913bb2f54a61ba5405854d55c5440e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5b
eb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765249870410128435,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hc6dw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8707fd2e-04a4-43af-a006-b0e095a07219,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4c4905e19bc165d54c1af250a9c4818a5b6a4384cbe29474cf68f91d97a79d4,PodSandboxId:74173fa06f726c07368573450905f40fd0145304520cc443e146026a124988a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709
a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765249870445524040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a2b67d2-ed64-4802-a716-91a73b8a5c7a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad8ae8d00bc6e371ff96964b92f8c54adbbfe0cbf3b05c29c4d61e3a5acf3ea7,PodSandboxId:55c0ffb7013008678536526e6059107044fe191f049a1598bd11cf4c06f46383,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765249867306815301,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-500822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7475b4f0e2f89082ed5b3e62025ceb5e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:385b9e5a93437d09780ad1ce7473e410df0ca490c6385c7d2e71bb146be674b3,PodSandboxId:de0df51e272926913b5b78a72db332d882bf3e32fb7611078f77166c89413d8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765249867302064413,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-500822,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9709878124410761f5de7b63f14f563,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=28c3c57e-1f57-4e86-8c64-7653b7443cc0 name=/runtime.v1.RuntimeServic
e/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	413198ceb1c9b       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   12 seconds ago      Running             kube-controller-manager   1                   42474dcbcdb3e       kube-controller-manager-test-preload-500822   kube-system
	afbe908db0a9d       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   12 seconds ago      Running             kube-scheduler            1                   1ff6b622ba95c       kube-scheduler-test-preload-500822            kube-system
	6f9a919aa697c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   24 seconds ago      Running             coredns                   1                   547b51d122b82       coredns-66bc5c9577-d99xm                      kube-system
	b4c4905e19bc1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   28 seconds ago      Running             storage-provisioner       1                   74173fa06f726       storage-provisioner                           kube-system
	558c2b538ba4e       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   28 seconds ago      Running             kube-proxy                1                   d2ab9fbbf99ce       kube-proxy-hc6dw                              kube-system
	ad8ae8d00bc6e       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   31 seconds ago      Running             etcd                      1                   55c0ffb701300       etcd-test-preload-500822                      kube-system
	385b9e5a93437       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   31 seconds ago      Running             kube-apiserver            1                   de0df51e27292       kube-apiserver-test-preload-500822            kube-system
	
	
	==> coredns [6f9a919aa697c06cfec7a57e3ae3a5f70d7db4f1c9d282809ced4e4fb2b9db0f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45176 - 54337 "HINFO IN 7219681378792155479.6873232713027298916. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.038600754s
	
	
	==> describe nodes <==
	Name:               test-preload-500822
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-500822
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=test-preload-500822
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T03_09_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 03:09:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-500822
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 03:11:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 03:11:11 +0000   Tue, 09 Dec 2025 03:09:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 03:11:11 +0000   Tue, 09 Dec 2025 03:09:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 03:11:11 +0000   Tue, 09 Dec 2025 03:09:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 03:11:11 +0000   Tue, 09 Dec 2025 03:11:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.162
	  Hostname:    test-preload-500822
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 a92d72f17dd44ccf831b6848b5682f7e
	  System UUID:                a92d72f1-7dd4-4ccf-831b-6848b5682f7e
	  Boot ID:                    ff52e0aa-0ea3-4a50-96e7-c136e3fad584
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-d99xm                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     110s
	  kube-system                 etcd-test-preload-500822                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         115s
	  kube-system                 kube-apiserver-test-preload-500822             250m (12%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-test-preload-500822    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 kube-proxy-hc6dw                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-test-preload-500822             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 108s                 kube-proxy       
	  Normal   Starting                 27s                  kube-proxy       
	  Normal   Starting                 2m2s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m1s (x8 over 2m2s)  kubelet          Node test-preload-500822 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m1s (x8 over 2m2s)  kubelet          Node test-preload-500822 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m1s (x7 over 2m2s)  kubelet          Node test-preload-500822 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    115s                 kubelet          Node test-preload-500822 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  115s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  115s                 kubelet          Node test-preload-500822 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     115s                 kubelet          Node test-preload-500822 status is now: NodeHasSufficientPID
	  Normal   Starting                 115s                 kubelet          Starting kubelet.
	  Normal   NodeReady                114s                 kubelet          Node test-preload-500822 status is now: NodeReady
	  Normal   RegisteredNode           111s                 node-controller  Node test-preload-500822 event: Registered Node test-preload-500822 in Controller
	  Normal   Starting                 33s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  32s (x8 over 32s)    kubelet          Node test-preload-500822 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    32s (x8 over 32s)    kubelet          Node test-preload-500822 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     32s (x7 over 32s)    kubelet          Node test-preload-500822 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  32s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 29s                  kubelet          Node test-preload-500822 has been rebooted, boot id: ff52e0aa-0ea3-4a50-96e7-c136e3fad584
	  Normal   RegisteredNode           9s                   node-controller  Node test-preload-500822 event: Registered Node test-preload-500822 in Controller
	
	
	==> dmesg <==
	[Dec 9 03:10] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000054] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.007826] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.997135] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.086118] kauditd_printk_skb: 4 callbacks suppressed
	[Dec 9 03:11] kauditd_printk_skb: 102 callbacks suppressed
	[  +5.136523] kauditd_printk_skb: 168 callbacks suppressed
	[  +9.333270] kauditd_printk_skb: 125 callbacks suppressed
	[  +3.549819] kauditd_printk_skb: 51 callbacks suppressed
	
	
	==> etcd [ad8ae8d00bc6e371ff96964b92f8c54adbbfe0cbf3b05c29c4d61e3a5acf3ea7] <==
	{"level":"warn","ts":"2025-12-09T03:11:08.486679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:11:08.500378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:11:08.514921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:11:08.517839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:11:08.527563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:11:08.536905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:11:08.546041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:11:08.553827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:11:08.568286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:11:08.577363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:11:08.588848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:11:08.597419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:11:08.604787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:11:08.613679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:11:08.625080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:11:08.635325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:11:08.643041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:11:08.653107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:11:08.661858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:11:08.669408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:11:08.679076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:11:08.689862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:11:08.702093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:11:08.711510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:11:08.786347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52208","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:11:38 up 0 min,  0 users,  load average: 0.64, 0.20, 0.07
	Linux test-preload-500822 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [385b9e5a93437d09780ad1ce7473e410df0ca490c6385c7d2e71bb146be674b3] <==
	I1209 03:11:09.465265       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1209 03:11:09.469203       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1209 03:11:09.469276       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1209 03:11:09.476772       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 03:11:09.479176       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1209 03:11:09.480014       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1209 03:11:09.480443       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1209 03:11:09.480474       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1209 03:11:09.480563       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1209 03:11:09.480638       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1209 03:11:09.481333       1 aggregator.go:171] initial CRD sync complete...
	I1209 03:11:09.481380       1 autoregister_controller.go:144] Starting autoregister controller
	I1209 03:11:09.481388       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1209 03:11:09.481394       1 cache.go:39] Caches are synced for autoregister controller
	I1209 03:11:09.483081       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1209 03:11:09.494598       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1209 03:11:09.520219       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 03:11:10.090383       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1209 03:11:10.369981       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1209 03:11:11.328463       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1209 03:11:11.374880       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1209 03:11:11.409049       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 03:11:11.417055       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 03:11:29.892480       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 03:11:30.093965       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [413198ceb1c9b88baf52625adf9d19a682c8c2e4218aabcef8f5974e97d0a154] <==
	I1209 03:11:29.809862       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1209 03:11:29.817344       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1209 03:11:29.817375       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1209 03:11:29.817383       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1209 03:11:29.821396       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1209 03:11:29.827228       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1209 03:11:29.830577       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1209 03:11:29.840101       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1209 03:11:29.840180       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1209 03:11:29.840219       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1209 03:11:29.840229       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1209 03:11:29.840346       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1209 03:11:29.840365       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1209 03:11:29.840384       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1209 03:11:29.842199       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1209 03:11:29.842309       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1209 03:11:29.842389       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-500822"
	I1209 03:11:29.842457       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1209 03:11:29.847212       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1209 03:11:29.849510       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1209 03:11:29.852127       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1209 03:11:29.854989       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1209 03:11:29.857529       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1209 03:11:29.861058       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1209 03:11:29.868694       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	
	
	==> kube-proxy [558c2b538ba4e3c241b57c5107c4987ea749e0efab9ae055a6d825765018eb26] <==
	I1209 03:11:10.845430       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1209 03:11:10.946524       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1209 03:11:10.946564       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.162"]
	E1209 03:11:10.946642       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 03:11:10.987656       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1209 03:11:10.987708       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 03:11:10.987736       1 server_linux.go:132] "Using iptables Proxier"
	I1209 03:11:10.997541       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 03:11:10.997869       1 server.go:527] "Version info" version="v1.34.2"
	I1209 03:11:10.997882       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 03:11:11.003078       1 config.go:200] "Starting service config controller"
	I1209 03:11:11.003090       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 03:11:11.003104       1 config.go:106] "Starting endpoint slice config controller"
	I1209 03:11:11.003107       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 03:11:11.003117       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 03:11:11.003120       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 03:11:11.005205       1 config.go:309] "Starting node config controller"
	I1209 03:11:11.005233       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 03:11:11.005240       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 03:11:11.104132       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 03:11:11.104290       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 03:11:11.104300       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [afbe908db0a9d310704b8d38884373ae224a67ea20b0f0f6bc89980c7aaf91ab] <==
	I1209 03:11:27.246123       1 serving.go:386] Generated self-signed cert in-memory
	I1209 03:11:27.859672       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1209 03:11:27.859723       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 03:11:27.865907       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1209 03:11:27.866061       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1209 03:11:27.866102       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 03:11:27.866109       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 03:11:27.866118       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1209 03:11:27.866198       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1209 03:11:27.871817       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1209 03:11:27.872955       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 03:11:27.966499       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1209 03:11:27.966637       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 03:11:27.969347       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Dec 09 03:11:10 test-preload-500822 kubelet[1203]: I1209 03:11:10.078552    1203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8707fd2e-04a4-43af-a006-b0e095a07219-xtables-lock\") pod \"kube-proxy-hc6dw\" (UID: \"8707fd2e-04a4-43af-a006-b0e095a07219\") " pod="kube-system/kube-proxy-hc6dw"
	Dec 09 03:11:10 test-preload-500822 kubelet[1203]: I1209 03:11:10.078645    1203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8707fd2e-04a4-43af-a006-b0e095a07219-lib-modules\") pod \"kube-proxy-hc6dw\" (UID: \"8707fd2e-04a4-43af-a006-b0e095a07219\") " pod="kube-system/kube-proxy-hc6dw"
	Dec 09 03:11:10 test-preload-500822 kubelet[1203]: E1209 03:11:10.079087    1203 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 09 03:11:10 test-preload-500822 kubelet[1203]: E1209 03:11:10.079199    1203 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c374bc36-015a-483d-a39f-a8ad6d3c77c3-config-volume podName:c374bc36-015a-483d-a39f-a8ad6d3c77c3 nodeName:}" failed. No retries permitted until 2025-12-09 03:11:10.579176938 +0000 UTC m=+4.713544726 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c374bc36-015a-483d-a39f-a8ad6d3c77c3-config-volume") pod "coredns-66bc5c9577-d99xm" (UID: "c374bc36-015a-483d-a39f-a8ad6d3c77c3") : object "kube-system"/"coredns" not registered
	Dec 09 03:11:10 test-preload-500822 kubelet[1203]: E1209 03:11:10.581427    1203 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 09 03:11:10 test-preload-500822 kubelet[1203]: E1209 03:11:10.581535    1203 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c374bc36-015a-483d-a39f-a8ad6d3c77c3-config-volume podName:c374bc36-015a-483d-a39f-a8ad6d3c77c3 nodeName:}" failed. No retries permitted until 2025-12-09 03:11:11.581513748 +0000 UTC m=+5.715881551 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c374bc36-015a-483d-a39f-a8ad6d3c77c3-config-volume") pod "coredns-66bc5c9577-d99xm" (UID: "c374bc36-015a-483d-a39f-a8ad6d3c77c3") : object "kube-system"/"coredns" not registered
	Dec 09 03:11:11 test-preload-500822 kubelet[1203]: I1209 03:11:11.575690    1203 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 09 03:11:11 test-preload-500822 kubelet[1203]: E1209 03:11:11.591240    1203 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 09 03:11:11 test-preload-500822 kubelet[1203]: E1209 03:11:11.591506    1203 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c374bc36-015a-483d-a39f-a8ad6d3c77c3-config-volume podName:c374bc36-015a-483d-a39f-a8ad6d3c77c3 nodeName:}" failed. No retries permitted until 2025-12-09 03:11:13.591421143 +0000 UTC m=+7.725788919 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c374bc36-015a-483d-a39f-a8ad6d3c77c3-config-volume") pod "coredns-66bc5c9577-d99xm" (UID: "c374bc36-015a-483d-a39f-a8ad6d3c77c3") : object "kube-system"/"coredns" not registered
	Dec 09 03:11:16 test-preload-500822 kubelet[1203]: E1209 03:11:16.046267    1203 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765249876043081815 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 09 03:11:16 test-preload-500822 kubelet[1203]: E1209 03:11:16.046316    1203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765249876043081815 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 09 03:11:25 test-preload-500822 kubelet[1203]: I1209 03:11:25.963337    1203 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-500822"
	Dec 09 03:11:25 test-preload-500822 kubelet[1203]: I1209 03:11:25.963744    1203 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-500822"
	Dec 09 03:11:26 test-preload-500822 kubelet[1203]: I1209 03:11:26.006014    1203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b8913b9e86312d5683fec650338018e-usr-share-ca-certificates\") pod \"kube-controller-manager-test-preload-500822\" (UID: \"4b8913b9e86312d5683fec650338018e\") " pod="kube-system/kube-controller-manager-test-preload-500822"
	Dec 09 03:11:26 test-preload-500822 kubelet[1203]: I1209 03:11:26.006267    1203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4b8913b9e86312d5683fec650338018e-flexvolume-dir\") pod \"kube-controller-manager-test-preload-500822\" (UID: \"4b8913b9e86312d5683fec650338018e\") " pod="kube-system/kube-controller-manager-test-preload-500822"
	Dec 09 03:11:26 test-preload-500822 kubelet[1203]: I1209 03:11:26.006331    1203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b8913b9e86312d5683fec650338018e-kubeconfig\") pod \"kube-controller-manager-test-preload-500822\" (UID: \"4b8913b9e86312d5683fec650338018e\") " pod="kube-system/kube-controller-manager-test-preload-500822"
	Dec 09 03:11:26 test-preload-500822 kubelet[1203]: I1209 03:11:26.006415    1203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/68c33d252b1721a7fd83759abb4f6d6c-kubeconfig\") pod \"kube-scheduler-test-preload-500822\" (UID: \"68c33d252b1721a7fd83759abb4f6d6c\") " pod="kube-system/kube-scheduler-test-preload-500822"
	Dec 09 03:11:26 test-preload-500822 kubelet[1203]: I1209 03:11:26.006494    1203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b8913b9e86312d5683fec650338018e-ca-certs\") pod \"kube-controller-manager-test-preload-500822\" (UID: \"4b8913b9e86312d5683fec650338018e\") " pod="kube-system/kube-controller-manager-test-preload-500822"
	Dec 09 03:11:26 test-preload-500822 kubelet[1203]: I1209 03:11:26.006562    1203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b8913b9e86312d5683fec650338018e-k8s-certs\") pod \"kube-controller-manager-test-preload-500822\" (UID: \"4b8913b9e86312d5683fec650338018e\") " pod="kube-system/kube-controller-manager-test-preload-500822"
	Dec 09 03:11:26 test-preload-500822 kubelet[1203]: E1209 03:11:26.051818    1203 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765249886048977917 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 09 03:11:26 test-preload-500822 kubelet[1203]: E1209 03:11:26.051843    1203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765249886048977917 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 09 03:11:27 test-preload-500822 kubelet[1203]: I1209 03:11:27.200679    1203 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-test-preload-500822" podStartSLOduration=2.200661575 podStartE2EDuration="2.200661575s" podCreationTimestamp="2025-12-09 03:11:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 03:11:27.177255553 +0000 UTC m=+21.311623349" watchObservedRunningTime="2025-12-09 03:11:27.200661575 +0000 UTC m=+21.335029372"
	Dec 09 03:11:36 test-preload-500822 kubelet[1203]: E1209 03:11:36.054330    1203 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765249896054021343 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 09 03:11:36 test-preload-500822 kubelet[1203]: E1209 03:11:36.054349    1203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765249896054021343 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 09 03:11:36 test-preload-500822 kubelet[1203]: I1209 03:11:36.307224    1203 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-test-preload-500822" podStartSLOduration=11.307208255 podStartE2EDuration="11.307208255s" podCreationTimestamp="2025-12-09 03:11:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 03:11:27.202496402 +0000 UTC m=+21.336864198" watchObservedRunningTime="2025-12-09 03:11:36.307208255 +0000 UTC m=+30.441576051"
	
	
	==> storage-provisioner [b4c4905e19bc165d54c1af250a9c4818a5b6a4384cbe29474cf68f91d97a79d4] <==
	I1209 03:11:10.712971       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-500822 -n test-preload-500822
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-500822 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-500822" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-500822
--- FAIL: TestPreload (166.21s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (60.55s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-739105 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-739105 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (54.166548771s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-739105] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-254936/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-254936/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-739105" primary control-plane node in "pause-739105" cluster
	* Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-739105" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:18:39.448493  294888 out.go:360] Setting OutFile to fd 1 ...
	I1209 03:18:39.448865  294888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 03:18:39.448877  294888 out.go:374] Setting ErrFile to fd 2...
	I1209 03:18:39.448884  294888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 03:18:39.449207  294888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
	I1209 03:18:39.450008  294888 out.go:368] Setting JSON to false
	I1209 03:18:39.451326  294888 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":32469,"bootTime":1765217850,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 03:18:39.451420  294888 start.go:143] virtualization: kvm guest
	I1209 03:18:39.453936  294888 out.go:179] * [pause-739105] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 03:18:39.455660  294888 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 03:18:39.455778  294888 notify.go:221] Checking for updates...
	I1209 03:18:39.459935  294888 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:18:39.461637  294888 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-254936/kubeconfig
	I1209 03:18:39.463094  294888 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-254936/.minikube
	I1209 03:18:39.464627  294888 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 03:18:39.466006  294888 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:18:39.467987  294888 config.go:182] Loaded profile config "pause-739105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 03:18:39.468741  294888 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 03:18:39.515102  294888 out.go:179] * Using the kvm2 driver based on existing profile
	I1209 03:18:39.516443  294888 start.go:309] selected driver: kvm2
	I1209 03:18:39.516464  294888 start.go:927] validating driver "kvm2" against &{Name:pause-739105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.2 ClusterName:pause-739105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.124 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-insta
ller:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:18:39.516647  294888 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:18:39.517845  294888 cni.go:84] Creating CNI manager for ""
	I1209 03:18:39.517936  294888 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 03:18:39.518023  294888 start.go:353] cluster config:
	{Name:pause-739105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-739105 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.124 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false
portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:18:39.518184  294888 iso.go:125] acquiring lock: {Name:mk5e3a22cdf6cd1ed24c9a04adaf1049140c04b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:18:39.520139  294888 out.go:179] * Starting "pause-739105" primary control-plane node in "pause-739105" cluster
	I1209 03:18:39.524997  294888 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 03:18:39.525048  294888 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-254936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1209 03:18:39.525097  294888 cache.go:65] Caching tarball of preloaded images
	I1209 03:18:39.525202  294888 preload.go:238] Found /home/jenkins/minikube-integration/22081-254936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 03:18:39.525217  294888 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 03:18:39.525382  294888 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/pause-739105/config.json ...
	I1209 03:18:39.525609  294888 start.go:360] acquireMachinesLock for pause-739105: {Name:mkb4bf4bc2a6ad90b53de9be214957ca6809cd32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:18:46.508472  294888 start.go:364] duration metric: took 6.982784166s to acquireMachinesLock for "pause-739105"
	I1209 03:18:46.508543  294888 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:18:46.508551  294888 fix.go:54] fixHost starting: 
	I1209 03:18:46.511229  294888 fix.go:112] recreateIfNeeded on pause-739105: state=Running err=<nil>
	W1209 03:18:46.511266  294888 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 03:18:46.513638  294888 out.go:252] * Updating the running kvm2 "pause-739105" VM ...
	I1209 03:18:46.513673  294888 machine.go:94] provisionDockerMachine start ...
	I1209 03:18:46.518209  294888 main.go:143] libmachine: domain pause-739105 has defined MAC address 52:54:00:46:8f:9d in network mk-pause-739105
	I1209 03:18:46.518765  294888 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:46:8f:9d", ip: ""} in network mk-pause-739105: {Iface:virbr4 ExpiryTime:2025-12-09 04:17:28 +0000 UTC Type:0 Mac:52:54:00:46:8f:9d Iaid: IPaddr:192.168.72.124 Prefix:24 Hostname:pause-739105 Clientid:01:52:54:00:46:8f:9d}
	I1209 03:18:46.518807  294888 main.go:143] libmachine: domain pause-739105 has defined IP address 192.168.72.124 and MAC address 52:54:00:46:8f:9d in network mk-pause-739105
	I1209 03:18:46.519136  294888 main.go:143] libmachine: Using SSH client type: native
	I1209 03:18:46.519450  294888 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.72.124 22 <nil> <nil>}
	I1209 03:18:46.519471  294888 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 03:18:46.655009  294888 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-739105
	
	I1209 03:18:46.655057  294888 buildroot.go:166] provisioning hostname "pause-739105"
	I1209 03:18:46.659329  294888 main.go:143] libmachine: domain pause-739105 has defined MAC address 52:54:00:46:8f:9d in network mk-pause-739105
	I1209 03:18:46.659975  294888 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:46:8f:9d", ip: ""} in network mk-pause-739105: {Iface:virbr4 ExpiryTime:2025-12-09 04:17:28 +0000 UTC Type:0 Mac:52:54:00:46:8f:9d Iaid: IPaddr:192.168.72.124 Prefix:24 Hostname:pause-739105 Clientid:01:52:54:00:46:8f:9d}
	I1209 03:18:46.660014  294888 main.go:143] libmachine: domain pause-739105 has defined IP address 192.168.72.124 and MAC address 52:54:00:46:8f:9d in network mk-pause-739105
	I1209 03:18:46.660324  294888 main.go:143] libmachine: Using SSH client type: native
	I1209 03:18:46.660666  294888 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.72.124 22 <nil> <nil>}
	I1209 03:18:46.660690  294888 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-739105 && echo "pause-739105" | sudo tee /etc/hostname
	I1209 03:18:46.823899  294888 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-739105
	
	I1209 03:18:46.829613  294888 main.go:143] libmachine: domain pause-739105 has defined MAC address 52:54:00:46:8f:9d in network mk-pause-739105
	I1209 03:18:46.830177  294888 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:46:8f:9d", ip: ""} in network mk-pause-739105: {Iface:virbr4 ExpiryTime:2025-12-09 04:17:28 +0000 UTC Type:0 Mac:52:54:00:46:8f:9d Iaid: IPaddr:192.168.72.124 Prefix:24 Hostname:pause-739105 Clientid:01:52:54:00:46:8f:9d}
	I1209 03:18:46.830253  294888 main.go:143] libmachine: domain pause-739105 has defined IP address 192.168.72.124 and MAC address 52:54:00:46:8f:9d in network mk-pause-739105
	I1209 03:18:46.830558  294888 main.go:143] libmachine: Using SSH client type: native
	I1209 03:18:46.830851  294888 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.72.124 22 <nil> <nil>}
	I1209 03:18:46.830878  294888 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-739105' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-739105/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-739105' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 03:18:46.969510  294888 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 03:18:46.969554  294888 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22081-254936/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-254936/.minikube}
	I1209 03:18:46.969587  294888 buildroot.go:174] setting up certificates
	I1209 03:18:46.969600  294888 provision.go:84] configureAuth start
	I1209 03:18:46.973376  294888 main.go:143] libmachine: domain pause-739105 has defined MAC address 52:54:00:46:8f:9d in network mk-pause-739105
	I1209 03:18:46.973904  294888 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:46:8f:9d", ip: ""} in network mk-pause-739105: {Iface:virbr4 ExpiryTime:2025-12-09 04:17:28 +0000 UTC Type:0 Mac:52:54:00:46:8f:9d Iaid: IPaddr:192.168.72.124 Prefix:24 Hostname:pause-739105 Clientid:01:52:54:00:46:8f:9d}
	I1209 03:18:46.973937  294888 main.go:143] libmachine: domain pause-739105 has defined IP address 192.168.72.124 and MAC address 52:54:00:46:8f:9d in network mk-pause-739105
	I1209 03:18:46.976722  294888 main.go:143] libmachine: domain pause-739105 has defined MAC address 52:54:00:46:8f:9d in network mk-pause-739105
	I1209 03:18:46.977225  294888 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:46:8f:9d", ip: ""} in network mk-pause-739105: {Iface:virbr4 ExpiryTime:2025-12-09 04:17:28 +0000 UTC Type:0 Mac:52:54:00:46:8f:9d Iaid: IPaddr:192.168.72.124 Prefix:24 Hostname:pause-739105 Clientid:01:52:54:00:46:8f:9d}
	I1209 03:18:46.977263  294888 main.go:143] libmachine: domain pause-739105 has defined IP address 192.168.72.124 and MAC address 52:54:00:46:8f:9d in network mk-pause-739105
	I1209 03:18:46.977454  294888 provision.go:143] copyHostCerts
	I1209 03:18:46.977522  294888 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-254936/.minikube/ca.pem, removing ...
	I1209 03:18:46.977534  294888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-254936/.minikube/ca.pem
	I1209 03:18:46.977589  294888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-254936/.minikube/ca.pem (1078 bytes)
	I1209 03:18:46.977694  294888 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-254936/.minikube/cert.pem, removing ...
	I1209 03:18:46.977709  294888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-254936/.minikube/cert.pem
	I1209 03:18:46.977747  294888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-254936/.minikube/cert.pem (1123 bytes)
	I1209 03:18:46.977892  294888 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-254936/.minikube/key.pem, removing ...
	I1209 03:18:46.977904  294888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-254936/.minikube/key.pem
	I1209 03:18:46.977928  294888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-254936/.minikube/key.pem (1679 bytes)
	I1209 03:18:46.977979  294888 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-254936/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca-key.pem org=jenkins.pause-739105 san=[127.0.0.1 192.168.72.124 localhost minikube pause-739105]
	I1209 03:18:47.069201  294888 provision.go:177] copyRemoteCerts
	I1209 03:18:47.069262  294888 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 03:18:47.072014  294888 main.go:143] libmachine: domain pause-739105 has defined MAC address 52:54:00:46:8f:9d in network mk-pause-739105
	I1209 03:18:47.072452  294888 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:46:8f:9d", ip: ""} in network mk-pause-739105: {Iface:virbr4 ExpiryTime:2025-12-09 04:17:28 +0000 UTC Type:0 Mac:52:54:00:46:8f:9d Iaid: IPaddr:192.168.72.124 Prefix:24 Hostname:pause-739105 Clientid:01:52:54:00:46:8f:9d}
	I1209 03:18:47.072478  294888 main.go:143] libmachine: domain pause-739105 has defined IP address 192.168.72.124 and MAC address 52:54:00:46:8f:9d in network mk-pause-739105
	I1209 03:18:47.072679  294888 sshutil.go:53] new ssh client: &{IP:192.168.72.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/pause-739105/id_rsa Username:docker}
	I1209 03:18:47.173590  294888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 03:18:47.212375  294888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1209 03:18:47.254842  294888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 03:18:47.302546  294888 provision.go:87] duration metric: took 332.928998ms to configureAuth
	I1209 03:18:47.302580  294888 buildroot.go:189] setting minikube options for container-runtime
	I1209 03:18:47.302787  294888 config.go:182] Loaded profile config "pause-739105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 03:18:47.306442  294888 main.go:143] libmachine: domain pause-739105 has defined MAC address 52:54:00:46:8f:9d in network mk-pause-739105
	I1209 03:18:47.307045  294888 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:46:8f:9d", ip: ""} in network mk-pause-739105: {Iface:virbr4 ExpiryTime:2025-12-09 04:17:28 +0000 UTC Type:0 Mac:52:54:00:46:8f:9d Iaid: IPaddr:192.168.72.124 Prefix:24 Hostname:pause-739105 Clientid:01:52:54:00:46:8f:9d}
	I1209 03:18:47.307096  294888 main.go:143] libmachine: domain pause-739105 has defined IP address 192.168.72.124 and MAC address 52:54:00:46:8f:9d in network mk-pause-739105
	I1209 03:18:47.307342  294888 main.go:143] libmachine: Using SSH client type: native
	I1209 03:18:47.307652  294888 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.72.124 22 <nil> <nil>}
	I1209 03:18:47.307738  294888 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 03:18:53.015067  294888 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 03:18:53.015097  294888 machine.go:97] duration metric: took 6.501414879s to provisionDockerMachine
	I1209 03:18:53.015116  294888 start.go:293] postStartSetup for "pause-739105" (driver="kvm2")
	I1209 03:18:53.015131  294888 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 03:18:53.015225  294888 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 03:18:53.018857  294888 main.go:143] libmachine: domain pause-739105 has defined MAC address 52:54:00:46:8f:9d in network mk-pause-739105
	I1209 03:18:53.019481  294888 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:46:8f:9d", ip: ""} in network mk-pause-739105: {Iface:virbr4 ExpiryTime:2025-12-09 04:17:28 +0000 UTC Type:0 Mac:52:54:00:46:8f:9d Iaid: IPaddr:192.168.72.124 Prefix:24 Hostname:pause-739105 Clientid:01:52:54:00:46:8f:9d}
	I1209 03:18:53.019529  294888 main.go:143] libmachine: domain pause-739105 has defined IP address 192.168.72.124 and MAC address 52:54:00:46:8f:9d in network mk-pause-739105
	I1209 03:18:53.019771  294888 sshutil.go:53] new ssh client: &{IP:192.168.72.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/pause-739105/id_rsa Username:docker}
	I1209 03:18:53.117288  294888 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 03:18:53.124711  294888 info.go:137] Remote host: Buildroot 2025.02
	I1209 03:18:53.124752  294888 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-254936/.minikube/addons for local assets ...
	I1209 03:18:53.124867  294888 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-254936/.minikube/files for local assets ...
	I1209 03:18:53.124956  294888 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-254936/.minikube/files/etc/ssl/certs/2588542.pem -> 2588542.pem in /etc/ssl/certs
	I1209 03:18:53.125065  294888 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 03:18:53.143008  294888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/files/etc/ssl/certs/2588542.pem --> /etc/ssl/certs/2588542.pem (1708 bytes)
	I1209 03:18:53.185989  294888 start.go:296] duration metric: took 170.830356ms for postStartSetup
	I1209 03:18:53.186047  294888 fix.go:56] duration metric: took 6.677495696s for fixHost
	I1209 03:18:53.189214  294888 main.go:143] libmachine: domain pause-739105 has defined MAC address 52:54:00:46:8f:9d in network mk-pause-739105
	I1209 03:18:53.189699  294888 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:46:8f:9d", ip: ""} in network mk-pause-739105: {Iface:virbr4 ExpiryTime:2025-12-09 04:17:28 +0000 UTC Type:0 Mac:52:54:00:46:8f:9d Iaid: IPaddr:192.168.72.124 Prefix:24 Hostname:pause-739105 Clientid:01:52:54:00:46:8f:9d}
	I1209 03:18:53.189732  294888 main.go:143] libmachine: domain pause-739105 has defined IP address 192.168.72.124 and MAC address 52:54:00:46:8f:9d in network mk-pause-739105
	I1209 03:18:53.190090  294888 main.go:143] libmachine: Using SSH client type: native
	I1209 03:18:53.190416  294888 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.72.124 22 <nil> <nil>}
	I1209 03:18:53.190433  294888 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1209 03:18:53.314375  294888 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765250333.309295590
	
	I1209 03:18:53.314402  294888 fix.go:216] guest clock: 1765250333.309295590
	I1209 03:18:53.314413  294888 fix.go:229] Guest: 2025-12-09 03:18:53.30929559 +0000 UTC Remote: 2025-12-09 03:18:53.186054245 +0000 UTC m=+13.810180360 (delta=123.241345ms)
	I1209 03:18:53.314437  294888 fix.go:200] guest clock delta is within tolerance: 123.241345ms
	I1209 03:18:53.314444  294888 start.go:83] releasing machines lock for "pause-739105", held for 6.805926474s
	I1209 03:18:53.317791  294888 main.go:143] libmachine: domain pause-739105 has defined MAC address 52:54:00:46:8f:9d in network mk-pause-739105
	I1209 03:18:53.318470  294888 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:46:8f:9d", ip: ""} in network mk-pause-739105: {Iface:virbr4 ExpiryTime:2025-12-09 04:17:28 +0000 UTC Type:0 Mac:52:54:00:46:8f:9d Iaid: IPaddr:192.168.72.124 Prefix:24 Hostname:pause-739105 Clientid:01:52:54:00:46:8f:9d}
	I1209 03:18:53.318500  294888 main.go:143] libmachine: domain pause-739105 has defined IP address 192.168.72.124 and MAC address 52:54:00:46:8f:9d in network mk-pause-739105
	I1209 03:18:53.319177  294888 ssh_runner.go:195] Run: cat /version.json
	I1209 03:18:53.319381  294888 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 03:18:53.323104  294888 main.go:143] libmachine: domain pause-739105 has defined MAC address 52:54:00:46:8f:9d in network mk-pause-739105
	I1209 03:18:53.323409  294888 main.go:143] libmachine: domain pause-739105 has defined MAC address 52:54:00:46:8f:9d in network mk-pause-739105
	I1209 03:18:53.323669  294888 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:46:8f:9d", ip: ""} in network mk-pause-739105: {Iface:virbr4 ExpiryTime:2025-12-09 04:17:28 +0000 UTC Type:0 Mac:52:54:00:46:8f:9d Iaid: IPaddr:192.168.72.124 Prefix:24 Hostname:pause-739105 Clientid:01:52:54:00:46:8f:9d}
	I1209 03:18:53.323709  294888 main.go:143] libmachine: domain pause-739105 has defined IP address 192.168.72.124 and MAC address 52:54:00:46:8f:9d in network mk-pause-739105
	I1209 03:18:53.323882  294888 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:46:8f:9d", ip: ""} in network mk-pause-739105: {Iface:virbr4 ExpiryTime:2025-12-09 04:17:28 +0000 UTC Type:0 Mac:52:54:00:46:8f:9d Iaid: IPaddr:192.168.72.124 Prefix:24 Hostname:pause-739105 Clientid:01:52:54:00:46:8f:9d}
	I1209 03:18:53.323901  294888 sshutil.go:53] new ssh client: &{IP:192.168.72.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/pause-739105/id_rsa Username:docker}
	I1209 03:18:53.323925  294888 main.go:143] libmachine: domain pause-739105 has defined IP address 192.168.72.124 and MAC address 52:54:00:46:8f:9d in network mk-pause-739105
	I1209 03:18:53.324115  294888 sshutil.go:53] new ssh client: &{IP:192.168.72.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/pause-739105/id_rsa Username:docker}
	I1209 03:18:53.415759  294888 ssh_runner.go:195] Run: systemctl --version
	I1209 03:18:53.441424  294888 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 03:18:53.603448  294888 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 03:18:53.615555  294888 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 03:18:53.615640  294888 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 03:18:53.628017  294888 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 03:18:53.628050  294888 start.go:496] detecting cgroup driver to use...
	I1209 03:18:53.628143  294888 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 03:18:53.652636  294888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 03:18:53.671305  294888 docker.go:218] disabling cri-docker service (if available) ...
	I1209 03:18:53.671391  294888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 03:18:53.693025  294888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 03:18:53.710446  294888 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 03:18:53.917296  294888 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 03:18:54.126416  294888 docker.go:234] disabling docker service ...
	I1209 03:18:54.126501  294888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 03:18:54.165737  294888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 03:18:54.185097  294888 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 03:18:54.404273  294888 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 03:18:54.612387  294888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 03:18:54.632257  294888 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 03:18:54.664466  294888 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 03:18:54.664551  294888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 03:18:54.678357  294888 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 03:18:54.678461  294888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 03:18:54.695194  294888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 03:18:54.713981  294888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 03:18:54.731049  294888 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 03:18:54.749603  294888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 03:18:54.766361  294888 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 03:18:54.782600  294888 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 03:18:54.802439  294888 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 03:18:54.820510  294888 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 03:18:54.840252  294888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:18:55.078796  294888 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 03:18:55.518809  294888 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 03:18:55.518922  294888 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 03:18:55.527536  294888 start.go:564] Will wait 60s for crictl version
	I1209 03:18:55.527634  294888 ssh_runner.go:195] Run: which crictl
	I1209 03:18:55.532780  294888 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 03:18:55.576037  294888 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 03:18:55.576157  294888 ssh_runner.go:195] Run: crio --version
	I1209 03:18:55.613786  294888 ssh_runner.go:195] Run: crio --version
	I1209 03:18:55.650451  294888 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1209 03:18:55.655332  294888 main.go:143] libmachine: domain pause-739105 has defined MAC address 52:54:00:46:8f:9d in network mk-pause-739105
	I1209 03:18:55.655894  294888 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:46:8f:9d", ip: ""} in network mk-pause-739105: {Iface:virbr4 ExpiryTime:2025-12-09 04:17:28 +0000 UTC Type:0 Mac:52:54:00:46:8f:9d Iaid: IPaddr:192.168.72.124 Prefix:24 Hostname:pause-739105 Clientid:01:52:54:00:46:8f:9d}
	I1209 03:18:55.655943  294888 main.go:143] libmachine: domain pause-739105 has defined IP address 192.168.72.124 and MAC address 52:54:00:46:8f:9d in network mk-pause-739105
	I1209 03:18:55.656190  294888 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1209 03:18:55.663339  294888 kubeadm.go:884] updating cluster {Name:pause-739105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2
ClusterName:pause-739105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.124 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 03:18:55.663591  294888 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 03:18:55.663661  294888 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 03:18:55.722471  294888 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 03:18:55.722504  294888 crio.go:433] Images already preloaded, skipping extraction
	I1209 03:18:55.722576  294888 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 03:18:55.763297  294888 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 03:18:55.763321  294888 cache_images.go:86] Images are preloaded, skipping loading
	I1209 03:18:55.763330  294888 kubeadm.go:935] updating node { 192.168.72.124 8443 v1.34.2 crio true true} ...
	I1209 03:18:55.763453  294888 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-739105 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.124
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-739105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 03:18:55.763545  294888 ssh_runner.go:195] Run: crio config
	I1209 03:18:55.824952  294888 cni.go:84] Creating CNI manager for ""
	I1209 03:18:55.824993  294888 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 03:18:55.825017  294888 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 03:18:55.825055  294888 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.124 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-739105 NodeName:pause-739105 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.124"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.124 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 03:18:55.825289  294888 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.124
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-739105"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.124"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.124"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 03:18:55.825373  294888 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 03:18:55.843392  294888 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 03:18:55.843487  294888 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 03:18:55.856754  294888 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1209 03:18:55.883712  294888 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 03:18:55.908290  294888 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1209 03:18:55.936377  294888 ssh_runner.go:195] Run: grep 192.168.72.124	control-plane.minikube.internal$ /etc/hosts
	I1209 03:18:55.943072  294888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:18:56.166029  294888 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 03:18:56.190156  294888 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/pause-739105 for IP: 192.168.72.124
	I1209 03:18:56.190191  294888 certs.go:195] generating shared ca certs ...
	I1209 03:18:56.190213  294888 certs.go:227] acquiring lock for ca certs: {Name:mk538e8c05758246ce904354c7e7ace78887d181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:18:56.190421  294888 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-254936/.minikube/ca.key
	I1209 03:18:56.190496  294888 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-254936/.minikube/proxy-client-ca.key
	I1209 03:18:56.190510  294888 certs.go:257] generating profile certs ...
	I1209 03:18:56.190626  294888 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/pause-739105/client.key
	I1209 03:18:56.190711  294888 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/pause-739105/apiserver.key.7efacd4d
	I1209 03:18:56.190775  294888 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/pause-739105/proxy-client.key
	I1209 03:18:56.190948  294888 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/258854.pem (1338 bytes)
	W1209 03:18:56.191006  294888 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-254936/.minikube/certs/258854_empty.pem, impossibly tiny 0 bytes
	I1209 03:18:56.191020  294888 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 03:18:56.191056  294888 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem (1078 bytes)
	I1209 03:18:56.191090  294888 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/cert.pem (1123 bytes)
	I1209 03:18:56.191126  294888 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/key.pem (1679 bytes)
	I1209 03:18:56.191185  294888 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-254936/.minikube/files/etc/ssl/certs/2588542.pem (1708 bytes)
	I1209 03:18:56.192076  294888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 03:18:56.227703  294888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 03:18:56.268792  294888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 03:18:56.402545  294888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 03:18:56.477662  294888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/pause-739105/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1209 03:18:56.517625  294888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/pause-739105/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 03:18:56.588392  294888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/pause-739105/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 03:18:56.716599  294888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/pause-739105/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 03:18:56.804408  294888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/files/etc/ssl/certs/2588542.pem --> /usr/share/ca-certificates/2588542.pem (1708 bytes)
	I1209 03:18:56.878964  294888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 03:18:56.960025  294888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/certs/258854.pem --> /usr/share/ca-certificates/258854.pem (1338 bytes)
	I1209 03:18:57.042880  294888 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 03:18:57.106964  294888 ssh_runner.go:195] Run: openssl version
	I1209 03:18:57.126037  294888 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2588542.pem
	I1209 03:18:57.168082  294888 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2588542.pem /etc/ssl/certs/2588542.pem
	I1209 03:18:57.194581  294888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2588542.pem
	I1209 03:18:57.211437  294888 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 02:16 /usr/share/ca-certificates/2588542.pem
	I1209 03:18:57.211526  294888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2588542.pem
	I1209 03:18:57.225759  294888 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 03:18:57.254977  294888 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 03:18:57.287109  294888 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 03:18:57.369463  294888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 03:18:57.391381  294888 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 01:56 /usr/share/ca-certificates/minikubeCA.pem
	I1209 03:18:57.391469  294888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 03:18:57.422477  294888 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 03:18:57.472845  294888 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/258854.pem
	I1209 03:18:57.525229  294888 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/258854.pem /etc/ssl/certs/258854.pem
	I1209 03:18:57.573054  294888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/258854.pem
	I1209 03:18:57.593012  294888 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 02:16 /usr/share/ca-certificates/258854.pem
	I1209 03:18:57.593108  294888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/258854.pem
	I1209 03:18:57.631116  294888 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 03:18:57.687522  294888 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 03:18:57.711241  294888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 03:18:57.729869  294888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 03:18:57.743522  294888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 03:18:57.759818  294888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 03:18:57.776582  294888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 03:18:57.790339  294888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 03:18:57.801328  294888 kubeadm.go:401] StartCluster: {Name:pause-739105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 Cl
usterName:pause-739105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.124 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:18:57.801493  294888 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 03:18:57.801563  294888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 03:18:57.888191  294888 cri.go:89] found id: "a7ae140d28849706ed992a697e3d674291e73d88d78d98be55c0ec624c7bfa79"
	I1209 03:18:57.888216  294888 cri.go:89] found id: "de7292ca8714180c12d0930642833f614c66bde369219707289a884965f99ae3"
	I1209 03:18:57.888220  294888 cri.go:89] found id: "d4e129d74e7c8c6290797c64aa8bec8b1aff4105e7c26698b9f7aba0e9db897f"
	I1209 03:18:57.888224  294888 cri.go:89] found id: "4f488569925dc632f4dd1ca1b39df71b05e838a45e325da6c9ccc93996ce130c"
	I1209 03:18:57.888226  294888 cri.go:89] found id: "026aaeb74336572baaaf4524f3103e9e18100483e64abfe1932d87eab57a79dc"
	I1209 03:18:57.888230  294888 cri.go:89] found id: "14444bc2d3af3e65b99e16ca7ed91add6e58b282592f24742c4b426c0748bf39"
	I1209 03:18:57.888233  294888 cri.go:89] found id: "5138113956e089324ea0848fc4790e50de8b0ee09cf9fdd118b82f91c472562e"
	I1209 03:18:57.888236  294888 cri.go:89] found id: "18cfa357d2ab07d09e4da9dddc0d38271fe137c96d6622c238fbee708bf935f4"
	I1209 03:18:57.888239  294888 cri.go:89] found id: "376cab59933e3388b96f857dfa05e838511dd7b6779ffcac8c061855adc1855d"
	I1209 03:18:57.888247  294888 cri.go:89] found id: "e40b35dad2ad345edf9be43d0fb0d94f4e825b44eb65ddcec0728f0d726d297b"
	I1209 03:18:57.888250  294888 cri.go:89] found id: "e63cf1615052ef840d03a63a203cda43fe9bbcd1ed6faa309baacdada59acbcd"
	I1209 03:18:57.888259  294888 cri.go:89] found id: ""
	I1209 03:18:57.888321  294888 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-739105 -n pause-739105
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-739105 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-739105 logs -n 25: (3.080165731s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                          ARGS                                                                                                           │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p NoKubernetes-992827                                                                                                                                                                                                  │ NoKubernetes-992827       │ jenkins │ v1.37.0 │ 09 Dec 25 03:15 UTC │ 09 Dec 25 03:15 UTC │
	│ start   │ -p NoKubernetes-992827 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                                     │ NoKubernetes-992827       │ jenkins │ v1.37.0 │ 09 Dec 25 03:15 UTC │ 09 Dec 25 03:16 UTC │
	│ ssh     │ force-systemd-flag-150140 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                    │ force-systemd-flag-150140 │ jenkins │ v1.37.0 │ 09 Dec 25 03:15 UTC │ 09 Dec 25 03:15 UTC │
	│ delete  │ -p force-systemd-flag-150140                                                                                                                                                                                            │ force-systemd-flag-150140 │ jenkins │ v1.37.0 │ 09 Dec 25 03:15 UTC │ 09 Dec 25 03:15 UTC │
	│ start   │ -p cert-options-358032 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio │ cert-options-358032       │ jenkins │ v1.37.0 │ 09 Dec 25 03:15 UTC │ 09 Dec 25 03:16 UTC │
	│ ssh     │ -p NoKubernetes-992827 sudo systemctl is-active --quiet service kubelet                                                                                                                                                 │ NoKubernetes-992827       │ jenkins │ v1.37.0 │ 09 Dec 25 03:16 UTC │                     │
	│ stop    │ -p NoKubernetes-992827                                                                                                                                                                                                  │ NoKubernetes-992827       │ jenkins │ v1.37.0 │ 09 Dec 25 03:16 UTC │ 09 Dec 25 03:16 UTC │
	│ start   │ -p NoKubernetes-992827 --driver=kvm2  --container-runtime=crio                                                                                                                                                          │ NoKubernetes-992827       │ jenkins │ v1.37.0 │ 09 Dec 25 03:16 UTC │ 09 Dec 25 03:16 UTC │
	│ ssh     │ -p NoKubernetes-992827 sudo systemctl is-active --quiet service kubelet                                                                                                                                                 │ NoKubernetes-992827       │ jenkins │ v1.37.0 │ 09 Dec 25 03:16 UTC │                     │
	│ delete  │ -p NoKubernetes-992827                                                                                                                                                                                                  │ NoKubernetes-992827       │ jenkins │ v1.37.0 │ 09 Dec 25 03:16 UTC │ 09 Dec 25 03:16 UTC │
	│ start   │ -p kubernetes-upgrade-321262 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                  │ kubernetes-upgrade-321262 │ jenkins │ v1.37.0 │ 09 Dec 25 03:16 UTC │ 09 Dec 25 03:17 UTC │
	│ ssh     │ cert-options-358032 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                             │ cert-options-358032       │ jenkins │ v1.37.0 │ 09 Dec 25 03:16 UTC │ 09 Dec 25 03:16 UTC │
	│ ssh     │ -p cert-options-358032 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                           │ cert-options-358032       │ jenkins │ v1.37.0 │ 09 Dec 25 03:16 UTC │ 09 Dec 25 03:16 UTC │
	│ delete  │ -p cert-options-358032                                                                                                                                                                                                  │ cert-options-358032       │ jenkins │ v1.37.0 │ 09 Dec 25 03:16 UTC │ 09 Dec 25 03:16 UTC │
	│ start   │ -p pause-739105 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                                                                 │ pause-739105              │ jenkins │ v1.37.0 │ 09 Dec 25 03:16 UTC │ 09 Dec 25 03:18 UTC │
	│ stop    │ -p kubernetes-upgrade-321262                                                                                                                                                                                            │ kubernetes-upgrade-321262 │ jenkins │ v1.37.0 │ 09 Dec 25 03:17 UTC │ 09 Dec 25 03:17 UTC │
	│ start   │ -p kubernetes-upgrade-321262 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                           │ kubernetes-upgrade-321262 │ jenkins │ v1.37.0 │ 09 Dec 25 03:17 UTC │ 09 Dec 25 03:18 UTC │
	│ start   │ -p kubernetes-upgrade-321262 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                                                                                         │ kubernetes-upgrade-321262 │ jenkins │ v1.37.0 │ 09 Dec 25 03:18 UTC │                     │
	│ start   │ -p kubernetes-upgrade-321262 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                           │ kubernetes-upgrade-321262 │ jenkins │ v1.37.0 │ 09 Dec 25 03:18 UTC │ 09 Dec 25 03:18 UTC │
	│ delete  │ -p kubernetes-upgrade-321262                                                                                                                                                                                            │ kubernetes-upgrade-321262 │ jenkins │ v1.37.0 │ 09 Dec 25 03:18 UTC │ 09 Dec 25 03:18 UTC │
	│ start   │ -p stopped-upgrade-644254 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-644254    │ jenkins │ v1.35.0 │ 09 Dec 25 03:18 UTC │ 09 Dec 25 03:19 UTC │
	│ start   │ -p pause-739105 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                          │ pause-739105              │ jenkins │ v1.37.0 │ 09 Dec 25 03:18 UTC │ 09 Dec 25 03:19 UTC │
	│ stop    │ stopped-upgrade-644254 stop                                                                                                                                                                                             │ stopped-upgrade-644254    │ jenkins │ v1.35.0 │ 09 Dec 25 03:19 UTC │ 09 Dec 25 03:19 UTC │
	│ start   │ -p stopped-upgrade-644254 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                  │ stopped-upgrade-644254    │ jenkins │ v1.37.0 │ 09 Dec 25 03:19 UTC │                     │
	│ start   │ -p cert-expiration-699833 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio                                                                                                                 │ cert-expiration-699833    │ jenkins │ v1.37.0 │ 09 Dec 25 03:19 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 03:19:11
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 03:19:11.159161  295238 out.go:360] Setting OutFile to fd 1 ...
	I1209 03:19:11.159279  295238 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 03:19:11.159283  295238 out.go:374] Setting ErrFile to fd 2...
	I1209 03:19:11.159287  295238 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 03:19:11.159593  295238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
	I1209 03:19:11.160351  295238 out.go:368] Setting JSON to false
	I1209 03:19:11.161716  295238 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":32501,"bootTime":1765217850,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 03:19:11.161787  295238 start.go:143] virtualization: kvm guest
	I1209 03:19:11.164912  295238 out.go:179] * [cert-expiration-699833] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 03:19:11.166722  295238 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 03:19:11.166737  295238 notify.go:221] Checking for updates...
	I1209 03:19:11.170050  295238 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:19:11.171845  295238 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-254936/kubeconfig
	I1209 03:19:11.173233  295238 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-254936/.minikube
	I1209 03:19:11.174650  295238 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 03:19:11.176092  295238 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:19:11.178509  295238 config.go:182] Loaded profile config "cert-expiration-699833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 03:19:11.179356  295238 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 03:19:11.218318  295238 out.go:179] * Using the kvm2 driver based on existing profile
	I1209 03:19:11.219755  295238 start.go:309] selected driver: kvm2
	I1209 03:19:11.219768  295238 start.go:927] validating driver "kvm2" against &{Name:cert-expiration-699833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.34.2 ClusterName:cert-expiration-699833 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.113 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:19:11.219959  295238 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:19:11.221017  295238 cni.go:84] Creating CNI manager for ""
	I1209 03:19:11.221069  295238 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 03:19:11.221107  295238 start.go:353] cluster config:
	{Name:cert-expiration-699833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-699833 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.113 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:19:11.221212  295238 iso.go:125] acquiring lock: {Name:mk5e3a22cdf6cd1ed24c9a04adaf1049140c04b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:19:11.223204  295238 out.go:179] * Starting "cert-expiration-699833" primary control-plane node in "cert-expiration-699833" cluster
	I1209 03:19:10.808944  291970 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I1209 03:19:10.809756  291970 api_server.go:269] stopped: https://192.168.39.194:8443/healthz: Get "https://192.168.39.194:8443/healthz": dial tcp 192.168.39.194:8443: connect: connection refused
	I1209 03:19:10.809847  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 03:19:10.809916  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 03:19:10.857603  291970 cri.go:89] found id: "9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb"
	I1209 03:19:10.857635  291970 cri.go:89] found id: ""
	I1209 03:19:10.857646  291970 logs.go:282] 1 containers: [9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb]
	I1209 03:19:10.857752  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:10.862967  291970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 03:19:10.863073  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 03:19:10.911502  291970 cri.go:89] found id: "ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38"
	I1209 03:19:10.911529  291970 cri.go:89] found id: ""
	I1209 03:19:10.911538  291970 logs.go:282] 1 containers: [ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38]
	I1209 03:19:10.911615  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:10.917031  291970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 03:19:10.917128  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 03:19:10.976568  291970 cri.go:89] found id: "02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb"
	I1209 03:19:10.976602  291970 cri.go:89] found id: ""
	I1209 03:19:10.976615  291970 logs.go:282] 1 containers: [02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb]
	I1209 03:19:10.976697  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:10.982080  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 03:19:10.982170  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 03:19:11.035049  291970 cri.go:89] found id: "252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8"
	I1209 03:19:11.035078  291970 cri.go:89] found id: "3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81"
	I1209 03:19:11.035082  291970 cri.go:89] found id: ""
	I1209 03:19:11.035090  291970 logs.go:282] 2 containers: [252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8 3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81]
	I1209 03:19:11.035157  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:11.040279  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:11.045064  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 03:19:11.045151  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 03:19:11.085504  291970 cri.go:89] found id: "7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18"
	I1209 03:19:11.085529  291970 cri.go:89] found id: ""
	I1209 03:19:11.085538  291970 logs.go:282] 1 containers: [7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18]
	I1209 03:19:11.085596  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:11.090571  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 03:19:11.090642  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 03:19:11.149876  291970 cri.go:89] found id: "6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50"
	I1209 03:19:11.149902  291970 cri.go:89] found id: ""
	I1209 03:19:11.149911  291970 logs.go:282] 1 containers: [6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50]
	I1209 03:19:11.149976  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:11.158605  291970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 03:19:11.158684  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 03:19:11.209512  291970 cri.go:89] found id: ""
	I1209 03:19:11.209545  291970 logs.go:282] 0 containers: []
	W1209 03:19:11.209557  291970 logs.go:284] No container was found matching "kindnet"
	I1209 03:19:11.209564  291970 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 03:19:11.209631  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 03:19:11.257904  291970 cri.go:89] found id: "3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9"
	I1209 03:19:11.257939  291970 cri.go:89] found id: ""
	I1209 03:19:11.257952  291970 logs.go:282] 1 containers: [3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9]
	I1209 03:19:11.258048  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:11.264035  291970 logs.go:123] Gathering logs for kubelet ...
	I1209 03:19:11.264065  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:19:11.378322  291970 logs.go:123] Gathering logs for dmesg ...
	I1209 03:19:11.378366  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:19:11.397556  291970 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:19:11.397612  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 03:19:11.482941  291970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 03:19:11.482972  291970 logs.go:123] Gathering logs for kube-apiserver [9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb] ...
	I1209 03:19:11.483000  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb"
	I1209 03:19:11.529147  291970 logs.go:123] Gathering logs for kube-scheduler [3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81] ...
	I1209 03:19:11.529185  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81"
	I1209 03:19:11.589466  291970 logs.go:123] Gathering logs for kube-proxy [7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18] ...
	I1209 03:19:11.589501  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18"
	I1209 03:19:11.634447  291970 logs.go:123] Gathering logs for kube-controller-manager [6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50] ...
	I1209 03:19:11.634487  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50"
	I1209 03:19:11.677926  291970 logs.go:123] Gathering logs for etcd [ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38] ...
	I1209 03:19:11.677975  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38"
	I1209 03:19:11.741376  291970 logs.go:123] Gathering logs for coredns [02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb] ...
	I1209 03:19:11.741436  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb"
	I1209 03:19:11.788015  291970 logs.go:123] Gathering logs for kube-scheduler [252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8] ...
	I1209 03:19:11.788057  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8"
	I1209 03:19:11.875753  291970 logs.go:123] Gathering logs for storage-provisioner [3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9] ...
	I1209 03:19:11.875797  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9"
	I1209 03:19:11.924741  291970 logs.go:123] Gathering logs for CRI-O ...
	I1209 03:19:11.924781  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 03:19:12.291197  291970 logs.go:123] Gathering logs for container status ...
	I1209 03:19:12.291261  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:19:12.180443  294888 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 a7ae140d28849706ed992a697e3d674291e73d88d78d98be55c0ec624c7bfa79 de7292ca8714180c12d0930642833f614c66bde369219707289a884965f99ae3 d4e129d74e7c8c6290797c64aa8bec8b1aff4105e7c26698b9f7aba0e9db897f 4f488569925dc632f4dd1ca1b39df71b05e838a45e325da6c9ccc93996ce130c 026aaeb74336572baaaf4524f3103e9e18100483e64abfe1932d87eab57a79dc 14444bc2d3af3e65b99e16ca7ed91add6e58b282592f24742c4b426c0748bf39 5138113956e089324ea0848fc4790e50de8b0ee09cf9fdd118b82f91c472562e 18cfa357d2ab07d09e4da9dddc0d38271fe137c96d6622c238fbee708bf935f4 376cab59933e3388b96f857dfa05e838511dd7b6779ffcac8c061855adc1855d e40b35dad2ad345edf9be43d0fb0d94f4e825b44eb65ddcec0728f0d726d297b e63cf1615052ef840d03a63a203cda43fe9bbcd1ed6faa309baacdada59acbcd: (14.059466258s)
	W1209 03:19:12.180545  294888 kubeadm.go:649] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 a7ae140d28849706ed992a697e3d674291e73d88d78d98be55c0ec624c7bfa79 de7292ca8714180c12d0930642833f614c66bde369219707289a884965f99ae3 d4e129d74e7c8c6290797c64aa8bec8b1aff4105e7c26698b9f7aba0e9db897f 4f488569925dc632f4dd1ca1b39df71b05e838a45e325da6c9ccc93996ce130c 026aaeb74336572baaaf4524f3103e9e18100483e64abfe1932d87eab57a79dc 14444bc2d3af3e65b99e16ca7ed91add6e58b282592f24742c4b426c0748bf39 5138113956e089324ea0848fc4790e50de8b0ee09cf9fdd118b82f91c472562e 18cfa357d2ab07d09e4da9dddc0d38271fe137c96d6622c238fbee708bf935f4 376cab59933e3388b96f857dfa05e838511dd7b6779ffcac8c061855adc1855d e40b35dad2ad345edf9be43d0fb0d94f4e825b44eb65ddcec0728f0d726d297b e63cf1615052ef840d03a63a203cda43fe9bbcd1ed6faa309baacdada59acbcd: Process exited with status 1
	stdout:
	a7ae140d28849706ed992a697e3d674291e73d88d78d98be55c0ec624c7bfa79
	de7292ca8714180c12d0930642833f614c66bde369219707289a884965f99ae3
	d4e129d74e7c8c6290797c64aa8bec8b1aff4105e7c26698b9f7aba0e9db897f
	4f488569925dc632f4dd1ca1b39df71b05e838a45e325da6c9ccc93996ce130c
	026aaeb74336572baaaf4524f3103e9e18100483e64abfe1932d87eab57a79dc
	14444bc2d3af3e65b99e16ca7ed91add6e58b282592f24742c4b426c0748bf39
	
	stderr:
	E1209 03:19:12.172496    3620 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5138113956e089324ea0848fc4790e50de8b0ee09cf9fdd118b82f91c472562e\": container with ID starting with 5138113956e089324ea0848fc4790e50de8b0ee09cf9fdd118b82f91c472562e not found: ID does not exist" containerID="5138113956e089324ea0848fc4790e50de8b0ee09cf9fdd118b82f91c472562e"
	time="2025-12-09T03:19:12Z" level=fatal msg="stopping the container \"5138113956e089324ea0848fc4790e50de8b0ee09cf9fdd118b82f91c472562e\": rpc error: code = NotFound desc = could not find container \"5138113956e089324ea0848fc4790e50de8b0ee09cf9fdd118b82f91c472562e\": container with ID starting with 5138113956e089324ea0848fc4790e50de8b0ee09cf9fdd118b82f91c472562e not found: ID does not exist"
	I1209 03:19:12.180651  294888 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 03:19:12.224910  294888 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 03:19:12.243921  294888 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  9 03:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5638 Dec  9 03:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1954 Dec  9 03:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5586 Dec  9 03:17 /etc/kubernetes/scheduler.conf
	
	I1209 03:19:12.244014  294888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 03:19:12.260292  294888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 03:19:12.276150  294888 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 03:19:12.276246  294888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 03:19:12.295723  294888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 03:19:12.312370  294888 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 03:19:12.312451  294888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 03:19:12.329403  294888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 03:19:12.344645  294888 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 03:19:12.344741  294888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 03:19:12.362329  294888 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 03:19:12.378544  294888 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:19:12.439535  294888 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:19:14.238369  294888 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.798779031s)
	I1209 03:19:14.238461  294888 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:19:10.084020  295180 out.go:252] * Restarting existing kvm2 VM for "stopped-upgrade-644254" ...
	I1209 03:19:10.084137  295180 main.go:143] libmachine: starting domain...
	I1209 03:19:10.084156  295180 main.go:143] libmachine: ensuring networks are active...
	I1209 03:19:10.085277  295180 main.go:143] libmachine: Ensuring network default is active
	I1209 03:19:10.085804  295180 main.go:143] libmachine: Ensuring network mk-stopped-upgrade-644254 is active
	I1209 03:19:10.086460  295180 main.go:143] libmachine: getting domain XML...
	I1209 03:19:10.087737  295180 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>stopped-upgrade-644254</name>
	  <uuid>03069003-742c-4b71-8624-52d7d2c4f9eb</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22081-254936/.minikube/machines/stopped-upgrade-644254/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22081-254936/.minikube/machines/stopped-upgrade-644254/stopped-upgrade-644254.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:f1:f3:a2'/>
	      <source network='mk-stopped-upgrade-644254'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:45:10:64'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1209 03:19:11.593103  295180 main.go:143] libmachine: waiting for domain to start...
	I1209 03:19:11.594792  295180 main.go:143] libmachine: domain is now running
	I1209 03:19:11.594809  295180 main.go:143] libmachine: waiting for IP...
	I1209 03:19:11.595805  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:11.596509  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has current primary IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:11.596529  295180 main.go:143] libmachine: found domain IP: 192.168.61.28
	I1209 03:19:11.596537  295180 main.go:143] libmachine: reserving static IP address...
	I1209 03:19:11.596993  295180 main.go:143] libmachine: found host DHCP lease matching {name: "stopped-upgrade-644254", mac: "52:54:00:f1:f3:a2", ip: "192.168.61.28"} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:18:44 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:11.597044  295180 main.go:143] libmachine: skip adding static IP to network mk-stopped-upgrade-644254 - found existing host DHCP lease matching {name: "stopped-upgrade-644254", mac: "52:54:00:f1:f3:a2", ip: "192.168.61.28"}
	I1209 03:19:11.597057  295180 main.go:143] libmachine: reserved static IP address 192.168.61.28 for domain stopped-upgrade-644254
	I1209 03:19:11.597068  295180 main.go:143] libmachine: waiting for SSH...
	I1209 03:19:11.597076  295180 main.go:143] libmachine: Getting to WaitForSSH function...
	I1209 03:19:11.600041  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:11.600641  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:18:44 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:11.600681  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:11.600979  295180 main.go:143] libmachine: Using SSH client type: native
	I1209 03:19:11.601347  295180 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I1209 03:19:11.601372  295180 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1209 03:19:14.704103  295180 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.61.28:22: connect: no route to host
	I1209 03:19:11.224687  295238 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 03:19:11.224721  295238 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-254936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1209 03:19:11.224742  295238 cache.go:65] Caching tarball of preloaded images
	I1209 03:19:11.224927  295238 preload.go:238] Found /home/jenkins/minikube-integration/22081-254936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 03:19:11.224939  295238 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 03:19:11.225089  295238 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/cert-expiration-699833/config.json ...
	I1209 03:19:11.225427  295238 start.go:360] acquireMachinesLock for cert-expiration-699833: {Name:mkb4bf4bc2a6ad90b53de9be214957ca6809cd32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:19:14.856908  291970 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I1209 03:19:14.857647  291970 api_server.go:269] stopped: https://192.168.39.194:8443/healthz: Get "https://192.168.39.194:8443/healthz": dial tcp 192.168.39.194:8443: connect: connection refused
	I1209 03:19:14.857712  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 03:19:14.857764  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 03:19:14.911951  291970 cri.go:89] found id: "9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb"
	I1209 03:19:14.911984  291970 cri.go:89] found id: ""
	I1209 03:19:14.912008  291970 logs.go:282] 1 containers: [9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb]
	I1209 03:19:14.912084  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:14.916641  291970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 03:19:14.916724  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 03:19:14.960559  291970 cri.go:89] found id: "ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38"
	I1209 03:19:14.960591  291970 cri.go:89] found id: ""
	I1209 03:19:14.960601  291970 logs.go:282] 1 containers: [ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38]
	I1209 03:19:14.960680  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:14.966648  291970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 03:19:14.966750  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 03:19:15.009754  291970 cri.go:89] found id: "02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb"
	I1209 03:19:15.009785  291970 cri.go:89] found id: ""
	I1209 03:19:15.009797  291970 logs.go:282] 1 containers: [02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb]
	I1209 03:19:15.009881  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:15.015933  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 03:19:15.016013  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 03:19:15.070436  291970 cri.go:89] found id: "252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8"
	I1209 03:19:15.070466  291970 cri.go:89] found id: "3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81"
	I1209 03:19:15.070471  291970 cri.go:89] found id: ""
	I1209 03:19:15.070481  291970 logs.go:282] 2 containers: [252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8 3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81]
	I1209 03:19:15.070548  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:15.076510  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:15.082037  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 03:19:15.082126  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 03:19:15.127203  291970 cri.go:89] found id: "7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18"
	I1209 03:19:15.127236  291970 cri.go:89] found id: ""
	I1209 03:19:15.127249  291970 logs.go:282] 1 containers: [7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18]
	I1209 03:19:15.127332  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:15.133987  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 03:19:15.134065  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 03:19:15.180414  291970 cri.go:89] found id: "6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50"
	I1209 03:19:15.180443  291970 cri.go:89] found id: ""
	I1209 03:19:15.180455  291970 logs.go:282] 1 containers: [6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50]
	I1209 03:19:15.180526  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:15.186428  291970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 03:19:15.186537  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 03:19:15.240543  291970 cri.go:89] found id: ""
	I1209 03:19:15.240574  291970 logs.go:282] 0 containers: []
	W1209 03:19:15.240586  291970 logs.go:284] No container was found matching "kindnet"
	I1209 03:19:15.240594  291970 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 03:19:15.240657  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 03:19:15.296407  291970 cri.go:89] found id: "3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9"
	I1209 03:19:15.296440  291970 cri.go:89] found id: ""
	I1209 03:19:15.296451  291970 logs.go:282] 1 containers: [3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9]
	I1209 03:19:15.296528  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:15.302721  291970 logs.go:123] Gathering logs for etcd [ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38] ...
	I1209 03:19:15.302755  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38"
	I1209 03:19:15.372638  291970 logs.go:123] Gathering logs for coredns [02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb] ...
	I1209 03:19:15.372691  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb"
	I1209 03:19:15.420693  291970 logs.go:123] Gathering logs for kube-proxy [7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18] ...
	I1209 03:19:15.420732  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18"
	I1209 03:19:15.469893  291970 logs.go:123] Gathering logs for storage-provisioner [3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9] ...
	I1209 03:19:15.469948  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9"
	I1209 03:19:15.521259  291970 logs.go:123] Gathering logs for CRI-O ...
	I1209 03:19:15.521302  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 03:19:15.904296  291970 logs.go:123] Gathering logs for container status ...
	I1209 03:19:15.904341  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:19:15.957536  291970 logs.go:123] Gathering logs for dmesg ...
	I1209 03:19:15.957579  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:19:15.980131  291970 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:19:15.980178  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 03:19:16.059315  291970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 03:19:16.059349  291970 logs.go:123] Gathering logs for kube-scheduler [252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8] ...
	I1209 03:19:16.059368  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8"
	I1209 03:19:16.169095  291970 logs.go:123] Gathering logs for kube-scheduler [3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81] ...
	I1209 03:19:16.169144  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81"
	I1209 03:19:16.229345  291970 logs.go:123] Gathering logs for kube-controller-manager [6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50] ...
	I1209 03:19:16.229398  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50"
	I1209 03:19:16.282639  291970 logs.go:123] Gathering logs for kubelet ...
	I1209 03:19:16.282675  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:19:16.426733  291970 logs.go:123] Gathering logs for kube-apiserver [9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb] ...
	I1209 03:19:16.426780  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb"
	I1209 03:19:14.569336  294888 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:19:14.642490  294888 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:19:14.752905  294888 api_server.go:52] waiting for apiserver process to appear ...
	I1209 03:19:14.753027  294888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 03:19:15.254034  294888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 03:19:15.753198  294888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 03:19:15.799437  294888 api_server.go:72] duration metric: took 1.046549209s to wait for apiserver process to appear ...
	I1209 03:19:15.799468  294888 api_server.go:88] waiting for apiserver healthz status ...
	I1209 03:19:15.799493  294888 api_server.go:253] Checking apiserver healthz at https://192.168.72.124:8443/healthz ...
	I1209 03:19:18.508800  294888 api_server.go:279] https://192.168.72.124:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 03:19:18.508852  294888 api_server.go:103] status: https://192.168.72.124:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 03:19:18.508872  294888 api_server.go:253] Checking apiserver healthz at https://192.168.72.124:8443/healthz ...
	I1209 03:19:18.557050  294888 api_server.go:279] https://192.168.72.124:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 03:19:18.557090  294888 api_server.go:103] status: https://192.168.72.124:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 03:19:18.800529  294888 api_server.go:253] Checking apiserver healthz at https://192.168.72.124:8443/healthz ...
	I1209 03:19:18.806571  294888 api_server.go:279] https://192.168.72.124:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 03:19:18.806602  294888 api_server.go:103] status: https://192.168.72.124:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 03:19:19.300341  294888 api_server.go:253] Checking apiserver healthz at https://192.168.72.124:8443/healthz ...
	I1209 03:19:19.306985  294888 api_server.go:279] https://192.168.72.124:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 03:19:19.307027  294888 api_server.go:103] status: https://192.168.72.124:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 03:19:19.799767  294888 api_server.go:253] Checking apiserver healthz at https://192.168.72.124:8443/healthz ...
	I1209 03:19:19.805846  294888 api_server.go:279] https://192.168.72.124:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 03:19:19.805883  294888 api_server.go:103] status: https://192.168.72.124:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 03:19:20.299551  294888 api_server.go:253] Checking apiserver healthz at https://192.168.72.124:8443/healthz ...
	I1209 03:19:20.306319  294888 api_server.go:279] https://192.168.72.124:8443/healthz returned 200:
	ok
	I1209 03:19:20.319019  294888 api_server.go:141] control plane version: v1.34.2
	I1209 03:19:20.319057  294888 api_server.go:131] duration metric: took 4.5195811s to wait for apiserver health ...
	I1209 03:19:20.319069  294888 cni.go:84] Creating CNI manager for ""
	I1209 03:19:20.319078  294888 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 03:19:20.321406  294888 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 03:19:20.322999  294888 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 03:19:20.342654  294888 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 03:19:20.373443  294888 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 03:19:20.382359  294888 system_pods.go:59] 6 kube-system pods found
	I1209 03:19:20.382423  294888 system_pods.go:61] "coredns-66bc5c9577-pt698" [d79e9e39-615a-4e96-afd4-3b7e856cc3f4] Running
	I1209 03:19:20.382444  294888 system_pods.go:61] "etcd-pause-739105" [dce64bb8-662c-4e83-87d0-fa92866158e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 03:19:20.382458  294888 system_pods.go:61] "kube-apiserver-pause-739105" [e5bcabca-af2b-4f32-a16e-505e11121da2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 03:19:20.382475  294888 system_pods.go:61] "kube-controller-manager-pause-739105" [d2343ee0-bbbe-4f54-99ed-558aac463ec4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 03:19:20.382489  294888 system_pods.go:61] "kube-proxy-rxfdq" [ad6d4576-8e92-4abd-8193-d8b9ddd7266d] Running
	I1209 03:19:20.382504  294888 system_pods.go:61] "kube-scheduler-pause-739105" [2dfa00a8-526b-4380-bc39-645001782835] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 03:19:20.382515  294888 system_pods.go:74] duration metric: took 9.041331ms to wait for pod list to return data ...
	I1209 03:19:20.382531  294888 node_conditions.go:102] verifying NodePressure condition ...
	I1209 03:19:20.387618  294888 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 03:19:20.387657  294888 node_conditions.go:123] node cpu capacity is 2
	I1209 03:19:20.387676  294888 node_conditions.go:105] duration metric: took 5.138717ms to run NodePressure ...
	I1209 03:19:20.387748  294888 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:19:20.716929  294888 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1209 03:19:20.722989  294888 kubeadm.go:744] kubelet initialised
	I1209 03:19:20.723024  294888 kubeadm.go:745] duration metric: took 6.058648ms waiting for restarted kubelet to initialise ...
	I1209 03:19:20.723049  294888 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 03:19:20.757220  294888 ops.go:34] apiserver oom_adj: -16
	I1209 03:19:20.757255  294888 kubeadm.go:602] duration metric: took 22.789016706s to restartPrimaryControlPlane
	I1209 03:19:20.757270  294888 kubeadm.go:403] duration metric: took 22.955955066s to StartCluster
	I1209 03:19:20.757294  294888 settings.go:142] acquiring lock: {Name:mkec34d0133156567c6c6050ab2f8de3f197c63b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:19:20.757394  294888 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-254936/kubeconfig
	I1209 03:19:20.758934  294888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-254936/kubeconfig: {Name:mkaafbe94dbea876978b17d37022d815642e1aad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:19:20.759300  294888 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.72.124 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 03:19:20.759452  294888 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 03:19:20.759570  294888 config.go:182] Loaded profile config "pause-739105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 03:19:20.761167  294888 out.go:179] * Verifying Kubernetes components...
	I1209 03:19:20.761203  294888 out.go:179] * Enabled addons: 
	I1209 03:19:18.984449  291970 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I1209 03:19:18.985280  291970 api_server.go:269] stopped: https://192.168.39.194:8443/healthz: Get "https://192.168.39.194:8443/healthz": dial tcp 192.168.39.194:8443: connect: connection refused
	I1209 03:19:18.985347  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 03:19:18.985414  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 03:19:19.030950  291970 cri.go:89] found id: "9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb"
	I1209 03:19:19.030975  291970 cri.go:89] found id: ""
	I1209 03:19:19.030984  291970 logs.go:282] 1 containers: [9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb]
	I1209 03:19:19.031057  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:19.037226  291970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 03:19:19.037325  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 03:19:19.085128  291970 cri.go:89] found id: "ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38"
	I1209 03:19:19.085160  291970 cri.go:89] found id: ""
	I1209 03:19:19.085172  291970 logs.go:282] 1 containers: [ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38]
	I1209 03:19:19.085253  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:19.091568  291970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 03:19:19.091668  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 03:19:19.159167  291970 cri.go:89] found id: "02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb"
	I1209 03:19:19.159201  291970 cri.go:89] found id: ""
	I1209 03:19:19.159214  291970 logs.go:282] 1 containers: [02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb]
	I1209 03:19:19.159300  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:19.164545  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 03:19:19.164653  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 03:19:19.211716  291970 cri.go:89] found id: "252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8"
	I1209 03:19:19.211744  291970 cri.go:89] found id: "3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81"
	I1209 03:19:19.211749  291970 cri.go:89] found id: ""
	I1209 03:19:19.211760  291970 logs.go:282] 2 containers: [252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8 3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81]
	I1209 03:19:19.211855  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:19.218343  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:19.224001  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 03:19:19.224089  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 03:19:19.271073  291970 cri.go:89] found id: "7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18"
	I1209 03:19:19.271105  291970 cri.go:89] found id: ""
	I1209 03:19:19.271115  291970 logs.go:282] 1 containers: [7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18]
	I1209 03:19:19.271183  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:19.275972  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 03:19:19.276062  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 03:19:19.323127  291970 cri.go:89] found id: "6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50"
	I1209 03:19:19.323162  291970 cri.go:89] found id: ""
	I1209 03:19:19.323174  291970 logs.go:282] 1 containers: [6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50]
	I1209 03:19:19.323242  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:19.328603  291970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 03:19:19.328699  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 03:19:19.381128  291970 cri.go:89] found id: ""
	I1209 03:19:19.381159  291970 logs.go:282] 0 containers: []
	W1209 03:19:19.381170  291970 logs.go:284] No container was found matching "kindnet"
	I1209 03:19:19.381177  291970 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 03:19:19.381278  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 03:19:19.425899  291970 cri.go:89] found id: "3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9"
	I1209 03:19:19.425930  291970 cri.go:89] found id: ""
	I1209 03:19:19.425941  291970 logs.go:282] 1 containers: [3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9]
	I1209 03:19:19.426015  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:19.430521  291970 logs.go:123] Gathering logs for kube-scheduler [252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8] ...
	I1209 03:19:19.430548  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8"
	I1209 03:19:19.537382  291970 logs.go:123] Gathering logs for kube-scheduler [3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81] ...
	I1209 03:19:19.537443  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81"
	I1209 03:19:19.615658  291970 logs.go:123] Gathering logs for kube-proxy [7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18] ...
	I1209 03:19:19.615797  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18"
	I1209 03:19:19.678131  291970 logs.go:123] Gathering logs for kube-controller-manager [6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50] ...
	I1209 03:19:19.678189  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50"
	I1209 03:19:19.729143  291970 logs.go:123] Gathering logs for container status ...
	I1209 03:19:19.729188  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:19:19.782856  291970 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:19:19.782902  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 03:19:19.899676  291970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 03:19:19.899711  291970 logs.go:123] Gathering logs for etcd [ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38] ...
	I1209 03:19:19.899734  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38"
	I1209 03:19:19.952579  291970 logs.go:123] Gathering logs for coredns [02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb] ...
	I1209 03:19:19.952620  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb"
	I1209 03:19:19.997188  291970 logs.go:123] Gathering logs for storage-provisioner [3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9] ...
	I1209 03:19:19.997240  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9"
	I1209 03:19:20.048105  291970 logs.go:123] Gathering logs for CRI-O ...
	I1209 03:19:20.048139  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 03:19:20.402033  291970 logs.go:123] Gathering logs for kubelet ...
	I1209 03:19:20.402087  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:19:20.519493  291970 logs.go:123] Gathering logs for dmesg ...
	I1209 03:19:20.519549  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:19:20.543912  291970 logs.go:123] Gathering logs for kube-apiserver [9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb] ...
	I1209 03:19:20.543965  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb"
	I1209 03:19:23.109313  291970 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I1209 03:19:23.110107  291970 api_server.go:269] stopped: https://192.168.39.194:8443/healthz: Get "https://192.168.39.194:8443/healthz": dial tcp 192.168.39.194:8443: connect: connection refused
	I1209 03:19:23.110198  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 03:19:23.110277  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 03:19:23.175043  291970 cri.go:89] found id: "9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb"
	I1209 03:19:23.175068  291970 cri.go:89] found id: ""
	I1209 03:19:23.175077  291970 logs.go:282] 1 containers: [9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb]
	I1209 03:19:23.175145  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:23.181920  291970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 03:19:23.182029  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 03:19:23.229901  291970 cri.go:89] found id: "ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38"
	I1209 03:19:23.229934  291970 cri.go:89] found id: ""
	I1209 03:19:23.229946  291970 logs.go:282] 1 containers: [ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38]
	I1209 03:19:23.230023  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:23.235301  291970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 03:19:23.235394  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 03:19:23.288345  291970 cri.go:89] found id: "02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb"
	I1209 03:19:23.288377  291970 cri.go:89] found id: ""
	I1209 03:19:23.288388  291970 logs.go:282] 1 containers: [02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb]
	I1209 03:19:23.288463  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:23.293812  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 03:19:23.294040  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 03:19:23.356627  291970 cri.go:89] found id: "252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8"
	I1209 03:19:23.356658  291970 cri.go:89] found id: "3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81"
	I1209 03:19:23.356666  291970 cri.go:89] found id: ""
	I1209 03:19:23.356678  291970 logs.go:282] 2 containers: [252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8 3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81]
	I1209 03:19:23.356758  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:23.363202  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:23.370013  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 03:19:23.370103  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 03:19:23.427699  291970 cri.go:89] found id: "7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18"
	I1209 03:19:23.427730  291970 cri.go:89] found id: ""
	I1209 03:19:23.427741  291970 logs.go:282] 1 containers: [7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18]
	I1209 03:19:23.427817  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:23.435644  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 03:19:23.435753  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 03:19:23.482594  291970 cri.go:89] found id: "6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50"
	I1209 03:19:23.482629  291970 cri.go:89] found id: ""
	I1209 03:19:23.482642  291970 logs.go:282] 1 containers: [6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50]
	I1209 03:19:23.482720  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:23.488088  291970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 03:19:23.488184  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 03:19:23.527661  291970 cri.go:89] found id: ""
	I1209 03:19:23.527686  291970 logs.go:282] 0 containers: []
	W1209 03:19:23.527695  291970 logs.go:284] No container was found matching "kindnet"
	I1209 03:19:23.527701  291970 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 03:19:23.527756  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 03:19:23.574728  291970 cri.go:89] found id: "3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9"
	I1209 03:19:23.574757  291970 cri.go:89] found id: ""
	I1209 03:19:23.574768  291970 logs.go:282] 1 containers: [3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9]
	I1209 03:19:23.574861  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:23.579819  291970 logs.go:123] Gathering logs for dmesg ...
	I1209 03:19:23.579869  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:19:23.599801  291970 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:19:23.599871  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:19:20.762603  294888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:19:20.762602  294888 addons.go:530] duration metric: took 3.168533ms for enable addons: enabled=[]
	I1209 03:19:21.013543  294888 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 03:19:21.046017  294888 node_ready.go:35] waiting up to 6m0s for node "pause-739105" to be "Ready" ...
	I1209 03:19:21.049431  294888 node_ready.go:49] node "pause-739105" is "Ready"
	I1209 03:19:21.049464  294888 node_ready.go:38] duration metric: took 3.383872ms for node "pause-739105" to be "Ready" ...
	I1209 03:19:21.049481  294888 api_server.go:52] waiting for apiserver process to appear ...
	I1209 03:19:21.049535  294888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 03:19:21.071310  294888 api_server.go:72] duration metric: took 311.962801ms to wait for apiserver process to appear ...
	I1209 03:19:21.071344  294888 api_server.go:88] waiting for apiserver healthz status ...
	I1209 03:19:21.071372  294888 api_server.go:253] Checking apiserver healthz at https://192.168.72.124:8443/healthz ...
	I1209 03:19:21.085102  294888 api_server.go:279] https://192.168.72.124:8443/healthz returned 200:
	ok
	I1209 03:19:21.086551  294888 api_server.go:141] control plane version: v1.34.2
	I1209 03:19:21.086577  294888 api_server.go:131] duration metric: took 15.226442ms to wait for apiserver health ...
	I1209 03:19:21.086587  294888 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 03:19:21.092727  294888 system_pods.go:59] 6 kube-system pods found
	I1209 03:19:21.092753  294888 system_pods.go:61] "coredns-66bc5c9577-pt698" [d79e9e39-615a-4e96-afd4-3b7e856cc3f4] Running
	I1209 03:19:21.092762  294888 system_pods.go:61] "etcd-pause-739105" [dce64bb8-662c-4e83-87d0-fa92866158e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 03:19:21.092769  294888 system_pods.go:61] "kube-apiserver-pause-739105" [e5bcabca-af2b-4f32-a16e-505e11121da2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 03:19:21.092777  294888 system_pods.go:61] "kube-controller-manager-pause-739105" [d2343ee0-bbbe-4f54-99ed-558aac463ec4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 03:19:21.092781  294888 system_pods.go:61] "kube-proxy-rxfdq" [ad6d4576-8e92-4abd-8193-d8b9ddd7266d] Running
	I1209 03:19:21.092788  294888 system_pods.go:61] "kube-scheduler-pause-739105" [2dfa00a8-526b-4380-bc39-645001782835] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 03:19:21.092793  294888 system_pods.go:74] duration metric: took 6.20065ms to wait for pod list to return data ...
	I1209 03:19:21.092803  294888 default_sa.go:34] waiting for default service account to be created ...
	I1209 03:19:21.095508  294888 default_sa.go:45] found service account: "default"
	I1209 03:19:21.095531  294888 default_sa.go:55] duration metric: took 2.721055ms for default service account to be created ...
	I1209 03:19:21.095542  294888 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 03:19:21.099792  294888 system_pods.go:86] 6 kube-system pods found
	I1209 03:19:21.099820  294888 system_pods.go:89] "coredns-66bc5c9577-pt698" [d79e9e39-615a-4e96-afd4-3b7e856cc3f4] Running
	I1209 03:19:21.099858  294888 system_pods.go:89] "etcd-pause-739105" [dce64bb8-662c-4e83-87d0-fa92866158e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 03:19:21.099867  294888 system_pods.go:89] "kube-apiserver-pause-739105" [e5bcabca-af2b-4f32-a16e-505e11121da2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 03:19:21.099896  294888 system_pods.go:89] "kube-controller-manager-pause-739105" [d2343ee0-bbbe-4f54-99ed-558aac463ec4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 03:19:21.099903  294888 system_pods.go:89] "kube-proxy-rxfdq" [ad6d4576-8e92-4abd-8193-d8b9ddd7266d] Running
	I1209 03:19:21.099913  294888 system_pods.go:89] "kube-scheduler-pause-739105" [2dfa00a8-526b-4380-bc39-645001782835] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 03:19:21.099934  294888 system_pods.go:126] duration metric: took 4.374846ms to wait for k8s-apps to be running ...
	I1209 03:19:21.099950  294888 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 03:19:21.100015  294888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 03:19:21.117618  294888 system_svc.go:56] duration metric: took 17.654937ms WaitForService to wait for kubelet
	I1209 03:19:21.117655  294888 kubeadm.go:587] duration metric: took 358.316779ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:19:21.117672  294888 node_conditions.go:102] verifying NodePressure condition ...
	I1209 03:19:21.120626  294888 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 03:19:21.120658  294888 node_conditions.go:123] node cpu capacity is 2
	I1209 03:19:21.120675  294888 node_conditions.go:105] duration metric: took 2.997144ms to run NodePressure ...
	I1209 03:19:21.120691  294888 start.go:242] waiting for startup goroutines ...
	I1209 03:19:21.120701  294888 start.go:247] waiting for cluster config update ...
	I1209 03:19:21.120712  294888 start.go:256] writing updated cluster config ...
	I1209 03:19:21.121158  294888 ssh_runner.go:195] Run: rm -f paused
	I1209 03:19:21.130333  294888 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 03:19:21.131462  294888 kapi.go:59] client config for pause-739105: &rest.Config{Host:"https://192.168.72.124:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-254936/.minikube/profiles/pause-739105/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-254936/.minikube/profiles/pause-739105/client.key", CAFile:"/home/jenkins/minikube-integration/22081-254936/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28162e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 03:19:21.135318  294888 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pt698" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:19:21.141132  294888 pod_ready.go:94] pod "coredns-66bc5c9577-pt698" is "Ready"
	I1209 03:19:21.141168  294888 pod_ready.go:86] duration metric: took 5.813398ms for pod "coredns-66bc5c9577-pt698" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:19:21.145211  294888 pod_ready.go:83] waiting for pod "etcd-pause-739105" in "kube-system" namespace to be "Ready" or be gone ...
	W1209 03:19:23.154246  294888 pod_ready.go:104] pod "etcd-pause-739105" is not "Ready", error: <nil>
	I1209 03:19:20.784333  295180 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.61.28:22: connect: no route to host
	I1209 03:19:23.785252  295180 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.61.28:22: connect: connection refused
	I1209 03:19:28.033170  295238 start.go:364] duration metric: took 16.807713519s to acquireMachinesLock for "cert-expiration-699833"
	I1209 03:19:28.033218  295238 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:19:28.033224  295238 fix.go:54] fixHost starting: 
	I1209 03:19:28.035810  295238 fix.go:112] recreateIfNeeded on cert-expiration-699833: state=Running err=<nil>
	W1209 03:19:28.035854  295238 fix.go:138] unexpected machine state, will restart: <nil>
	W1209 03:19:23.692434  291970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 03:19:23.692468  291970 logs.go:123] Gathering logs for kube-proxy [7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18] ...
	I1209 03:19:23.692485  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18"
	I1209 03:19:23.736073  291970 logs.go:123] Gathering logs for kube-controller-manager [6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50] ...
	I1209 03:19:23.736116  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50"
	I1209 03:19:23.791444  291970 logs.go:123] Gathering logs for storage-provisioner [3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9] ...
	I1209 03:19:23.791491  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9"
	I1209 03:19:23.835308  291970 logs.go:123] Gathering logs for kubelet ...
	I1209 03:19:23.835348  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:19:23.955906  291970 logs.go:123] Gathering logs for kube-apiserver [9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb] ...
	I1209 03:19:23.955948  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb"
	I1209 03:19:24.008767  291970 logs.go:123] Gathering logs for etcd [ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38] ...
	I1209 03:19:24.008820  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38"
	I1209 03:19:24.063094  291970 logs.go:123] Gathering logs for coredns [02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb] ...
	I1209 03:19:24.063133  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb"
	I1209 03:19:24.113254  291970 logs.go:123] Gathering logs for kube-scheduler [252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8] ...
	I1209 03:19:24.113306  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8"
	I1209 03:19:24.221772  291970 logs.go:123] Gathering logs for kube-scheduler [3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81] ...
	I1209 03:19:24.221841  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81"
	I1209 03:19:24.275066  291970 logs.go:123] Gathering logs for CRI-O ...
	I1209 03:19:24.275107  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 03:19:24.604703  291970 logs.go:123] Gathering logs for container status ...
	I1209 03:19:24.604744  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:19:27.159010  291970 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I1209 03:19:27.159626  291970 api_server.go:269] stopped: https://192.168.39.194:8443/healthz: Get "https://192.168.39.194:8443/healthz": dial tcp 192.168.39.194:8443: connect: connection refused
	I1209 03:19:27.159688  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 03:19:27.159752  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 03:19:27.207363  291970 cri.go:89] found id: "9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb"
	I1209 03:19:27.207390  291970 cri.go:89] found id: ""
	I1209 03:19:27.207401  291970 logs.go:282] 1 containers: [9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb]
	I1209 03:19:27.207474  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:27.212361  291970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 03:19:27.212438  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 03:19:27.256254  291970 cri.go:89] found id: "ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38"
	I1209 03:19:27.256284  291970 cri.go:89] found id: ""
	I1209 03:19:27.256298  291970 logs.go:282] 1 containers: [ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38]
	I1209 03:19:27.256372  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:27.262300  291970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 03:19:27.262412  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 03:19:27.313414  291970 cri.go:89] found id: "02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb"
	I1209 03:19:27.313451  291970 cri.go:89] found id: ""
	I1209 03:19:27.313462  291970 logs.go:282] 1 containers: [02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb]
	I1209 03:19:27.313539  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:27.326377  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 03:19:27.326479  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 03:19:27.375400  291970 cri.go:89] found id: "252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8"
	I1209 03:19:27.375425  291970 cri.go:89] found id: "3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81"
	I1209 03:19:27.375429  291970 cri.go:89] found id: ""
	I1209 03:19:27.375436  291970 logs.go:282] 2 containers: [252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8 3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81]
	I1209 03:19:27.375516  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:27.380383  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:27.385022  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 03:19:27.385117  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 03:19:27.428243  291970 cri.go:89] found id: "7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18"
	I1209 03:19:27.428276  291970 cri.go:89] found id: ""
	I1209 03:19:27.428295  291970 logs.go:282] 1 containers: [7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18]
	I1209 03:19:27.428374  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:27.434721  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 03:19:27.434821  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 03:19:27.485802  291970 cri.go:89] found id: "6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50"
	I1209 03:19:27.485849  291970 cri.go:89] found id: ""
	I1209 03:19:27.485866  291970 logs.go:282] 1 containers: [6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50]
	I1209 03:19:27.485947  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:27.491916  291970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 03:19:27.492019  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 03:19:27.539186  291970 cri.go:89] found id: ""
	I1209 03:19:27.539231  291970 logs.go:282] 0 containers: []
	W1209 03:19:27.539242  291970 logs.go:284] No container was found matching "kindnet"
	I1209 03:19:27.539248  291970 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 03:19:27.539315  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 03:19:27.589996  291970 cri.go:89] found id: "3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9"
	I1209 03:19:27.590027  291970 cri.go:89] found id: ""
	I1209 03:19:27.590039  291970 logs.go:282] 1 containers: [3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9]
	I1209 03:19:27.590113  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:27.595183  291970 logs.go:123] Gathering logs for etcd [ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38] ...
	I1209 03:19:27.595218  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38"
	I1209 03:19:27.646336  291970 logs.go:123] Gathering logs for coredns [02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb] ...
	I1209 03:19:27.646372  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb"
	I1209 03:19:27.694252  291970 logs.go:123] Gathering logs for kube-scheduler [252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8] ...
	I1209 03:19:27.694298  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8"
	I1209 03:19:27.779189  291970 logs.go:123] Gathering logs for kube-proxy [7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18] ...
	I1209 03:19:27.779235  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18"
	I1209 03:19:27.825991  291970 logs.go:123] Gathering logs for storage-provisioner [3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9] ...
	I1209 03:19:27.826027  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9"
	I1209 03:19:27.882236  291970 logs.go:123] Gathering logs for CRI-O ...
	I1209 03:19:27.882269  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 03:19:28.278459  291970 logs.go:123] Gathering logs for container status ...
	I1209 03:19:28.278520  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:19:28.345959  291970 logs.go:123] Gathering logs for kubelet ...
	I1209 03:19:28.346005  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:19:28.469799  291970 logs.go:123] Gathering logs for dmesg ...
	I1209 03:19:28.469852  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:19:28.490072  291970 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:19:28.490117  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 03:19:28.580525  291970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 03:19:28.580560  291970 logs.go:123] Gathering logs for kube-scheduler [3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81] ...
	I1209 03:19:28.580578  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81"
	I1209 03:19:28.648093  291970 logs.go:123] Gathering logs for kube-controller-manager [6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50] ...
	I1209 03:19:28.648157  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50"
	W1209 03:19:25.650979  294888 pod_ready.go:104] pod "etcd-pause-739105" is not "Ready", error: <nil>
	W1209 03:19:27.652750  294888 pod_ready.go:104] pod "etcd-pause-739105" is not "Ready", error: <nil>
	I1209 03:19:26.891420  295180 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 03:19:26.895080  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:26.895735  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:19:23 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:26.895760  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:26.896063  295180 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/stopped-upgrade-644254/config.json ...
	I1209 03:19:26.896323  295180 machine.go:94] provisionDockerMachine start ...
	I1209 03:19:26.899066  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:26.899546  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:19:23 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:26.899574  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:26.899810  295180 main.go:143] libmachine: Using SSH client type: native
	I1209 03:19:26.900098  295180 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I1209 03:19:26.900111  295180 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 03:19:27.005259  295180 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 03:19:27.005290  295180 buildroot.go:166] provisioning hostname "stopped-upgrade-644254"
	I1209 03:19:27.008437  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.008946  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:19:23 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:27.008991  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.009188  295180 main.go:143] libmachine: Using SSH client type: native
	I1209 03:19:27.009459  295180 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I1209 03:19:27.009476  295180 main.go:143] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-644254 && echo "stopped-upgrade-644254" | sudo tee /etc/hostname
	I1209 03:19:27.131588  295180 main.go:143] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-644254
	
	I1209 03:19:27.134949  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.135364  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:19:23 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:27.135398  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.135696  295180 main.go:143] libmachine: Using SSH client type: native
	I1209 03:19:27.136043  295180 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I1209 03:19:27.136073  295180 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-644254' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-644254/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-644254' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 03:19:27.262416  295180 main.go:143] libmachine: SSH cmd err, output: <nil>: 
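The SSH script above keeps /etc/hosts consistent with the freshly set hostname: it rewrites the 127.0.1.1 entry only when no line already ends in the hostname, so re-running it is a no-op. Below is a minimal Go sketch of the same idempotent update, assuming direct access to the file rather than minikube's remote ssh_runner; the path and hostname are taken from the log, the helper name is illustrative.

    // ensureHostsEntry mirrors the shell above: rewrite or append the
    // 127.0.1.1 line only when the hostname is not already mapped.
    package hostsketch

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    func ensureHostsEntry(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        // grep -xq '.*\s<hostname>': some line already ends in the hostname.
        if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(hostname)+`$`).Match(data) {
            return nil
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.Match(data) {
            // sed -i 's/^127.0.1.1\s.*/127.0.1.1 <hostname>/g'
            updated := loopback.ReplaceAllString(string(data), "127.0.1.1 "+hostname)
            return os.WriteFile(path, []byte(updated), 0644)
        }
        // echo '127.0.1.1 <hostname>' | tee -a /etc/hosts
        appended := strings.TrimRight(string(data), "\n") + fmt.Sprintf("\n127.0.1.1 %s\n", hostname)
        return os.WriteFile(path, []byte(appended), 0644)
    }

Calling ensureHostsEntry("/etc/hosts", "stopped-upgrade-644254") twice leaves the file unchanged on the second call, which is why the provisioner can safely repeat this step.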
	I1209 03:19:27.262448  295180 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22081-254936/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-254936/.minikube}
	I1209 03:19:27.262496  295180 buildroot.go:174] setting up certificates
	I1209 03:19:27.262509  295180 provision.go:84] configureAuth start
	I1209 03:19:27.266015  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.266594  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:19:23 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:27.266630  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.269282  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.269684  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:19:23 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:27.269710  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.269919  295180 provision.go:143] copyHostCerts
	I1209 03:19:27.270005  295180 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-254936/.minikube/ca.pem, removing ...
	I1209 03:19:27.270020  295180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-254936/.minikube/ca.pem
	I1209 03:19:27.270098  295180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-254936/.minikube/ca.pem (1078 bytes)
	I1209 03:19:27.270209  295180 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-254936/.minikube/cert.pem, removing ...
	I1209 03:19:27.270221  295180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-254936/.minikube/cert.pem
	I1209 03:19:27.270251  295180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-254936/.minikube/cert.pem (1123 bytes)
	I1209 03:19:27.270313  295180 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-254936/.minikube/key.pem, removing ...
	I1209 03:19:27.270323  295180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-254936/.minikube/key.pem
	I1209 03:19:27.270346  295180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-254936/.minikube/key.pem (1679 bytes)
	I1209 03:19:27.270391  295180 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-254936/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-644254 san=[127.0.0.1 192.168.61.28 localhost minikube stopped-upgrade-644254]
	I1209 03:19:27.316292  295180 provision.go:177] copyRemoteCerts
	I1209 03:19:27.316389  295180 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 03:19:27.320013  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.320543  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:19:23 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:27.320570  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.320774  295180 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/stopped-upgrade-644254/id_rsa Username:docker}
	I1209 03:19:27.404703  295180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 03:19:27.436056  295180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1209 03:19:27.468583  295180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 03:19:27.503130  295180 provision.go:87] duration metric: took 240.60415ms to configureAuth
	I1209 03:19:27.503164  295180 buildroot.go:189] setting minikube options for container-runtime
	I1209 03:19:27.503418  295180 config.go:182] Loaded profile config "stopped-upgrade-644254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1209 03:19:27.506694  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.507111  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:19:23 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:27.507146  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.507363  295180 main.go:143] libmachine: Using SSH client type: native
	I1209 03:19:27.507616  295180 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I1209 03:19:27.507631  295180 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 03:19:27.767173  295180 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 03:19:27.767204  295180 machine.go:97] duration metric: took 870.864436ms to provisionDockerMachine
	I1209 03:19:27.767224  295180 start.go:293] postStartSetup for "stopped-upgrade-644254" (driver="kvm2")
	I1209 03:19:27.767236  295180 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 03:19:27.767312  295180 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 03:19:27.770491  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.770908  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:19:23 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:27.770948  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.771131  295180 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/stopped-upgrade-644254/id_rsa Username:docker}
	I1209 03:19:27.864490  295180 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 03:19:27.870280  295180 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 03:19:27.870321  295180 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-254936/.minikube/addons for local assets ...
	I1209 03:19:27.870409  295180 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-254936/.minikube/files for local assets ...
	I1209 03:19:27.870488  295180 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-254936/.minikube/files/etc/ssl/certs/2588542.pem -> 2588542.pem in /etc/ssl/certs
	I1209 03:19:27.870612  295180 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 03:19:27.882216  295180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/files/etc/ssl/certs/2588542.pem --> /etc/ssl/certs/2588542.pem (1708 bytes)
	I1209 03:19:27.919766  295180 start.go:296] duration metric: took 152.522264ms for postStartSetup
	I1209 03:19:27.919839  295180 fix.go:56] duration metric: took 17.841519475s for fixHost
	I1209 03:19:27.923466  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.923967  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:19:23 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:27.924018  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.924320  295180 main.go:143] libmachine: Using SSH client type: native
	I1209 03:19:27.924683  295180 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I1209 03:19:27.924705  295180 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1209 03:19:28.032970  295180 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765250367.994660283
	
	I1209 03:19:28.033002  295180 fix.go:216] guest clock: 1765250367.994660283
	I1209 03:19:28.033013  295180 fix.go:229] Guest: 2025-12-09 03:19:27.994660283 +0000 UTC Remote: 2025-12-09 03:19:27.919846532 +0000 UTC m=+17.963406645 (delta=74.813751ms)
	I1209 03:19:28.033038  295180 fix.go:200] guest clock delta is within tolerance: 74.813751ms
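The fix.go lines above read the guest clock over SSH with "date +%s.%N", compare it to the host clock, and accept the 74.813751ms delta as within tolerance. The following is a small Go sketch of that comparison, assuming a one-second tolerance; the actual threshold is not shown in the log, only the seconds.nanoseconds format is.

    // clockDeltaWithinTolerance parses `date +%s.%N` output (e.g.
    // "1765250367.994660283") and compares it against the local clock.
    package clocksketch

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func clockDeltaWithinTolerance(guestDate string, tolerance time.Duration) (time.Duration, bool, error) {
        parts := strings.SplitN(strings.TrimSpace(guestDate), ".", 2)
        if len(parts) != 2 {
            return 0, false, fmt.Errorf("unexpected date output %q", guestDate)
        }
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, false, err
        }
        nsec, err := strconv.ParseInt(parts[1], 10, 64) // %N is always 9 digits
        if err != nil {
            return 0, false, err
        }
        delta := time.Since(time.Unix(sec, nsec))
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance, nil
    }

Run at the moment the log line was produced, clockDeltaWithinTolerance("1765250367.994660283", time.Second) would report the roughly 75ms delta shown above and return true, so no clock adjustment is needed.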
	I1209 03:19:28.033045  295180 start.go:83] releasing machines lock for "stopped-upgrade-644254", held for 17.954752752s
	I1209 03:19:28.036810  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:28.037332  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:19:23 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:28.037358  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:28.037987  295180 ssh_runner.go:195] Run: cat /version.json
	I1209 03:19:28.038076  295180 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 03:19:28.042023  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:28.042120  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:28.042528  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:19:23 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:28.042560  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:28.042617  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:19:23 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:28.042649  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:28.043104  295180 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/stopped-upgrade-644254/id_rsa Username:docker}
	I1209 03:19:28.043343  295180 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/stopped-upgrade-644254/id_rsa Username:docker}
	W1209 03:19:28.145480  295180 out.go:285] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.35.0 -> Actual minikube version: v1.37.0
	I1209 03:19:28.145576  295180 ssh_runner.go:195] Run: systemctl --version
	I1209 03:19:28.154034  295180 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 03:19:28.312741  295180 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 03:19:28.321956  295180 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 03:19:28.322042  295180 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 03:19:28.347140  295180 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
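Before settling on CRI-O, the run above moves any bridge/podman CNI configs aside by renaming them to *.mk_disabled (here /etc/cni/net.d/87-podman-bridge.conflist). A Go sketch of the same move-aside pattern follows, assuming local file access; the find/-exec mv in the log does this over SSH.

    // disableBridgeCNIs renames bridge/podman CNI configs to *.mk_disabled so
    // the container runtime stops loading them, mirroring the find above.
    package cnisketch

    import (
        "os"
        "path/filepath"
        "strings"
    )

    func disableBridgeCNIs(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return disabled, err
                }
                disabled = append(disabled, src)
            }
        }
        return disabled, nil
    }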
	I1209 03:19:28.347187  295180 start.go:496] detecting cgroup driver to use...
	I1209 03:19:28.347279  295180 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 03:19:28.371115  295180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 03:19:28.390135  295180 docker.go:218] disabling cri-docker service (if available) ...
	I1209 03:19:28.390229  295180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 03:19:28.406773  295180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 03:19:28.422353  295180 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 03:19:28.566837  295180 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 03:19:28.757321  295180 docker.go:234] disabling docker service ...
	I1209 03:19:28.757430  295180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 03:19:28.776126  295180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 03:19:28.792061  295180 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 03:19:28.930819  295180 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 03:19:29.088606  295180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 03:19:29.108952  295180 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 03:19:29.137429  295180 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 03:19:29.137516  295180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 03:19:29.155693  295180 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 03:19:29.155795  295180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 03:19:29.169749  295180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 03:19:29.184644  295180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 03:19:29.199435  295180 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 03:19:29.213973  295180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 03:19:29.226356  295180 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 03:19:29.246914  295180 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
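Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf using the registry.k8s.io/pause:3.10 pause image, the cgroupfs cgroup manager, a "pod" conmon cgroup, and an unprivileged-port sysctl. Reconstructed from those commands only (the section headers below are the usual CRI-O ones and do not appear in the log), the drop-in ends up looking roughly like:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The "sudo systemctl restart crio" a few lines further down is what makes these settings take effect.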
	I1209 03:19:29.259864  295180 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 03:19:29.271052  295180 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 03:19:29.271120  295180 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 03:19:29.286170  295180 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 03:19:29.297596  295180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:19:29.421906  295180 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 03:19:29.524586  295180 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 03:19:29.524677  295180 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 03:19:29.529841  295180 start.go:564] Will wait 60s for crictl version
	I1209 03:19:29.529933  295180 ssh_runner.go:195] Run: which crictl
	I1209 03:19:29.534354  295180 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 03:19:29.578780  295180 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 03:19:29.578918  295180 ssh_runner.go:195] Run: crio --version
	I1209 03:19:29.615258  295180 ssh_runner.go:195] Run: crio --version
	I1209 03:19:29.651515  295180 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1209 03:19:29.655817  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:29.656239  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:19:23 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:29.656265  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:29.656471  295180 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1209 03:19:29.661216  295180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 03:19:29.675289  295180 kubeadm.go:884] updating cluster {Name:stopped-upgrade-644254 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:s
topped-upgrade-644254 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.28 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 03:19:29.675432  295180 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1209 03:19:29.675491  295180 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 03:19:29.720138  295180 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1209 03:19:29.720215  295180 ssh_runner.go:195] Run: which lz4
	I1209 03:19:29.724787  295180 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 03:19:29.729357  295180 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 03:19:29.729396  295180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
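The preload step above is a check-then-copy: stat the remote /preloaded.tar.lz4 and, only when that fails (the "Process exited with status 1" above), fall back to scp'ing the roughly 398 MB cached tarball from the host. A hedged Go sketch of that pattern; the Runner interface here is a stand-in for minikube's ssh_runner, not its real API.

    // ensurePreload copies the cached preload tarball to the guest only when
    // the remote file is missing, mirroring the stat-then-scp above.
    package preloadsketch

    type Runner interface {
        Run(cmd string) error            // run a remote command; error on non-zero exit
        Copy(local, remote string) error // scp-style transfer
    }

    func ensurePreload(r Runner, localTarball string) error {
        const remote = "/preloaded.tar.lz4"
        // stat exits non-zero when the file does not exist.
        if err := r.Run(`stat -c "%s %y" ` + remote); err == nil {
            return nil // already present, skip the large transfer
        }
        return r.Copy(localTarball, remote)
    }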
	I1209 03:19:28.037516  295238 out.go:252] * Updating the running kvm2 "cert-expiration-699833" VM ...
	I1209 03:19:28.037542  295238 machine.go:94] provisionDockerMachine start ...
	I1209 03:19:28.041675  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:28.042423  295238 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e7:a7:4d", ip: ""} in network mk-cert-expiration-699833: {Iface:virbr2 ExpiryTime:2025-12-09 04:15:44 +0000 UTC Type:0 Mac:52:54:00:e7:a7:4d Iaid: IPaddr:192.168.50.113 Prefix:24 Hostname:cert-expiration-699833 Clientid:01:52:54:00:e7:a7:4d}
	I1209 03:19:28.042464  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined IP address 192.168.50.113 and MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:28.042999  295238 main.go:143] libmachine: Using SSH client type: native
	I1209 03:19:28.043358  295238 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.50.113 22 <nil> <nil>}
	I1209 03:19:28.043369  295238 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 03:19:28.174516  295238 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-699833
	
	I1209 03:19:28.174555  295238 buildroot.go:166] provisioning hostname "cert-expiration-699833"
	I1209 03:19:28.178361  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:28.178867  295238 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e7:a7:4d", ip: ""} in network mk-cert-expiration-699833: {Iface:virbr2 ExpiryTime:2025-12-09 04:15:44 +0000 UTC Type:0 Mac:52:54:00:e7:a7:4d Iaid: IPaddr:192.168.50.113 Prefix:24 Hostname:cert-expiration-699833 Clientid:01:52:54:00:e7:a7:4d}
	I1209 03:19:28.178900  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined IP address 192.168.50.113 and MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:28.179078  295238 main.go:143] libmachine: Using SSH client type: native
	I1209 03:19:28.179360  295238 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.50.113 22 <nil> <nil>}
	I1209 03:19:28.179368  295238 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-699833 && echo "cert-expiration-699833" | sudo tee /etc/hostname
	I1209 03:19:28.318360  295238 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-699833
	
	I1209 03:19:28.322533  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:28.323135  295238 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e7:a7:4d", ip: ""} in network mk-cert-expiration-699833: {Iface:virbr2 ExpiryTime:2025-12-09 04:15:44 +0000 UTC Type:0 Mac:52:54:00:e7:a7:4d Iaid: IPaddr:192.168.50.113 Prefix:24 Hostname:cert-expiration-699833 Clientid:01:52:54:00:e7:a7:4d}
	I1209 03:19:28.323185  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined IP address 192.168.50.113 and MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:28.323457  295238 main.go:143] libmachine: Using SSH client type: native
	I1209 03:19:28.323716  295238 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.50.113 22 <nil> <nil>}
	I1209 03:19:28.323728  295238 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-699833' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-699833/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-699833' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 03:19:28.448086  295238 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 03:19:28.448121  295238 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22081-254936/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-254936/.minikube}
	I1209 03:19:28.448149  295238 buildroot.go:174] setting up certificates
	I1209 03:19:28.448163  295238 provision.go:84] configureAuth start
	I1209 03:19:28.452071  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:28.452610  295238 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e7:a7:4d", ip: ""} in network mk-cert-expiration-699833: {Iface:virbr2 ExpiryTime:2025-12-09 04:15:44 +0000 UTC Type:0 Mac:52:54:00:e7:a7:4d Iaid: IPaddr:192.168.50.113 Prefix:24 Hostname:cert-expiration-699833 Clientid:01:52:54:00:e7:a7:4d}
	I1209 03:19:28.452631  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined IP address 192.168.50.113 and MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:28.455624  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:28.456017  295238 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e7:a7:4d", ip: ""} in network mk-cert-expiration-699833: {Iface:virbr2 ExpiryTime:2025-12-09 04:15:44 +0000 UTC Type:0 Mac:52:54:00:e7:a7:4d Iaid: IPaddr:192.168.50.113 Prefix:24 Hostname:cert-expiration-699833 Clientid:01:52:54:00:e7:a7:4d}
	I1209 03:19:28.456048  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined IP address 192.168.50.113 and MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:28.456197  295238 provision.go:143] copyHostCerts
	I1209 03:19:28.456281  295238 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-254936/.minikube/cert.pem, removing ...
	I1209 03:19:28.456290  295238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-254936/.minikube/cert.pem
	I1209 03:19:28.456370  295238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-254936/.minikube/cert.pem (1123 bytes)
	I1209 03:19:28.456474  295238 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-254936/.minikube/key.pem, removing ...
	I1209 03:19:28.456478  295238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-254936/.minikube/key.pem
	I1209 03:19:28.456499  295238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-254936/.minikube/key.pem (1679 bytes)
	I1209 03:19:28.456556  295238 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-254936/.minikube/ca.pem, removing ...
	I1209 03:19:28.456559  295238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-254936/.minikube/ca.pem
	I1209 03:19:28.456575  295238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-254936/.minikube/ca.pem (1078 bytes)
	I1209 03:19:28.456623  295238 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-254936/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-699833 san=[127.0.0.1 192.168.50.113 cert-expiration-699833 localhost minikube]
	I1209 03:19:28.720780  295238 provision.go:177] copyRemoteCerts
	I1209 03:19:28.720840  295238 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 03:19:28.724308  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:28.724792  295238 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e7:a7:4d", ip: ""} in network mk-cert-expiration-699833: {Iface:virbr2 ExpiryTime:2025-12-09 04:15:44 +0000 UTC Type:0 Mac:52:54:00:e7:a7:4d Iaid: IPaddr:192.168.50.113 Prefix:24 Hostname:cert-expiration-699833 Clientid:01:52:54:00:e7:a7:4d}
	I1209 03:19:28.724811  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined IP address 192.168.50.113 and MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:28.724967  295238 sshutil.go:53] new ssh client: &{IP:192.168.50.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/cert-expiration-699833/id_rsa Username:docker}
	I1209 03:19:28.821073  295238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 03:19:28.858990  295238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1209 03:19:28.897536  295238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 03:19:28.940865  295238 provision.go:87] duration metric: took 492.684984ms to configureAuth
	I1209 03:19:28.940890  295238 buildroot.go:189] setting minikube options for container-runtime
	I1209 03:19:28.941086  295238 config.go:182] Loaded profile config "cert-expiration-699833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 03:19:28.944145  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:28.944606  295238 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e7:a7:4d", ip: ""} in network mk-cert-expiration-699833: {Iface:virbr2 ExpiryTime:2025-12-09 04:15:44 +0000 UTC Type:0 Mac:52:54:00:e7:a7:4d Iaid: IPaddr:192.168.50.113 Prefix:24 Hostname:cert-expiration-699833 Clientid:01:52:54:00:e7:a7:4d}
	I1209 03:19:28.944639  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined IP address 192.168.50.113 and MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:28.944821  295238 main.go:143] libmachine: Using SSH client type: native
	I1209 03:19:28.945144  295238 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.50.113 22 <nil> <nil>}
	I1209 03:19:28.945160  295238 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1209 03:19:29.653489  294888 pod_ready.go:104] pod "etcd-pause-739105" is not "Ready", error: <nil>
	W1209 03:19:32.154375  294888 pod_ready.go:104] pod "etcd-pause-739105" is not "Ready", error: <nil>
	I1209 03:19:32.654169  294888 pod_ready.go:94] pod "etcd-pause-739105" is "Ready"
	I1209 03:19:32.654205  294888 pod_ready.go:86] duration metric: took 11.508963618s for pod "etcd-pause-739105" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:19:32.658482  294888 pod_ready.go:83] waiting for pod "kube-apiserver-pause-739105" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:19:32.665456  294888 pod_ready.go:94] pod "kube-apiserver-pause-739105" is "Ready"
	I1209 03:19:32.665493  294888 pod_ready.go:86] duration metric: took 6.969874ms for pod "kube-apiserver-pause-739105" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:19:32.668999  294888 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-739105" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:19:32.677052  294888 pod_ready.go:94] pod "kube-controller-manager-pause-739105" is "Ready"
	I1209 03:19:32.677090  294888 pod_ready.go:86] duration metric: took 8.053977ms for pod "kube-controller-manager-pause-739105" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:19:32.679956  294888 pod_ready.go:83] waiting for pod "kube-proxy-rxfdq" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:19:32.850259  294888 pod_ready.go:94] pod "kube-proxy-rxfdq" is "Ready"
	I1209 03:19:32.850290  294888 pod_ready.go:86] duration metric: took 170.298804ms for pod "kube-proxy-rxfdq" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:19:33.050566  294888 pod_ready.go:83] waiting for pod "kube-scheduler-pause-739105" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:19:33.450718  294888 pod_ready.go:94] pod "kube-scheduler-pause-739105" is "Ready"
	I1209 03:19:33.450762  294888 pod_ready.go:86] duration metric: took 400.159535ms for pod "kube-scheduler-pause-739105" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:19:33.450782  294888 pod_ready.go:40] duration metric: took 12.320406455s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 03:19:33.510365  294888 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1209 03:19:33.512044  294888 out.go:179] * Done! kubectl is now configured to use "pause-739105" cluster and "default" namespace by default
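The pod_ready lines above poll each kube-system control-plane pod until its Ready condition is true; after the restart, etcd took about 11.5s while the apiserver, controller-manager, proxy and scheduler were Ready almost immediately. Below is a minimal client-go sketch of such a wait loop; the 2s interval and 4m timeout are assumptions, not minikube's actual settings.

    // waitPodReady polls one kube-system pod until its Ready condition is true.
    package readysketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitPodReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, 4*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient errors count as "not ready yet"
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }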
	I1209 03:19:28.694991  291970 logs.go:123] Gathering logs for kube-apiserver [9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb] ...
	I1209 03:19:28.695031  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb"
	I1209 03:19:31.243987  291970 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I1209 03:19:31.244692  291970 api_server.go:269] stopped: https://192.168.39.194:8443/healthz: Get "https://192.168.39.194:8443/healthz": dial tcp 192.168.39.194:8443: connect: connection refused
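The two api_server.go lines above are a plain HTTPS GET against the apiserver's /healthz endpoint, which fails with "connection refused" while the apiserver container is still coming back up. A stdlib-only Go sketch of that probe, assuming InsecureSkipVerify instead of the cluster CA that minikube actually loads:

    // probeHealthz issues a GET against e.g. https://192.168.39.194:8443/healthz.
    package healthzsketch

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func probeHealthz(url string) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. "connect: connection refused" during the restart
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %s", resp.Status)
        }
        return nil
    }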
	I1209 03:19:31.244781  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 03:19:31.244875  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 03:19:31.302919  291970 cri.go:89] found id: "9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb"
	I1209 03:19:31.302950  291970 cri.go:89] found id: ""
	I1209 03:19:31.302961  291970 logs.go:282] 1 containers: [9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb]
	I1209 03:19:31.303036  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:31.308153  291970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 03:19:31.308252  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 03:19:31.369995  291970 cri.go:89] found id: "ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38"
	I1209 03:19:31.370023  291970 cri.go:89] found id: ""
	I1209 03:19:31.370034  291970 logs.go:282] 1 containers: [ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38]
	I1209 03:19:31.370110  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:31.375556  291970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 03:19:31.375650  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 03:19:31.425376  291970 cri.go:89] found id: "02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb"
	I1209 03:19:31.425410  291970 cri.go:89] found id: ""
	I1209 03:19:31.425422  291970 logs.go:282] 1 containers: [02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb]
	I1209 03:19:31.425502  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:31.431172  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 03:19:31.431367  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 03:19:31.490166  291970 cri.go:89] found id: "252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8"
	I1209 03:19:31.490195  291970 cri.go:89] found id: "3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81"
	I1209 03:19:31.490201  291970 cri.go:89] found id: ""
	I1209 03:19:31.490210  291970 logs.go:282] 2 containers: [252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8 3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81]
	I1209 03:19:31.490284  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:31.495223  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:31.499959  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 03:19:31.500043  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 03:19:31.545106  291970 cri.go:89] found id: "7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18"
	I1209 03:19:31.545130  291970 cri.go:89] found id: ""
	I1209 03:19:31.545138  291970 logs.go:282] 1 containers: [7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18]
	I1209 03:19:31.545201  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:31.550087  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 03:19:31.550163  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 03:19:31.592966  291970 cri.go:89] found id: "6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50"
	I1209 03:19:31.592995  291970 cri.go:89] found id: ""
	I1209 03:19:31.593004  291970 logs.go:282] 1 containers: [6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50]
	I1209 03:19:31.593064  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:31.599248  291970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 03:19:31.599329  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 03:19:31.649102  291970 cri.go:89] found id: ""
	I1209 03:19:31.649136  291970 logs.go:282] 0 containers: []
	W1209 03:19:31.649148  291970 logs.go:284] No container was found matching "kindnet"
	I1209 03:19:31.649156  291970 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 03:19:31.649230  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 03:19:31.700123  291970 cri.go:89] found id: "3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9"
	I1209 03:19:31.700146  291970 cri.go:89] found id: ""
	I1209 03:19:31.700154  291970 logs.go:282] 1 containers: [3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9]
	I1209 03:19:31.700211  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:31.704852  291970 logs.go:123] Gathering logs for coredns [02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb] ...
	I1209 03:19:31.704885  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb"
	I1209 03:19:31.746182  291970 logs.go:123] Gathering logs for kube-scheduler [3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81] ...
	I1209 03:19:31.746231  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81"
	I1209 03:19:31.804476  291970 logs.go:123] Gathering logs for kube-proxy [7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18] ...
	I1209 03:19:31.804521  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18"
	I1209 03:19:31.861938  291970 logs.go:123] Gathering logs for CRI-O ...
	I1209 03:19:31.861979  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 03:19:32.189141  291970 logs.go:123] Gathering logs for container status ...
	I1209 03:19:32.189181  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:19:32.248192  291970 logs.go:123] Gathering logs for kube-apiserver [9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb] ...
	I1209 03:19:32.248223  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb"
	I1209 03:19:32.299693  291970 logs.go:123] Gathering logs for etcd [ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38] ...
	I1209 03:19:32.299726  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38"
	I1209 03:19:32.354632  291970 logs.go:123] Gathering logs for kube-scheduler [252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8] ...
	I1209 03:19:32.354682  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8"
	I1209 03:19:32.444759  291970 logs.go:123] Gathering logs for kube-controller-manager [6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50] ...
	I1209 03:19:32.444802  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50"
	I1209 03:19:32.491124  291970 logs.go:123] Gathering logs for storage-provisioner [3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9] ...
	I1209 03:19:32.491158  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9"
	I1209 03:19:32.531800  291970 logs.go:123] Gathering logs for kubelet ...
	I1209 03:19:32.531851  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:19:32.651416  291970 logs.go:123] Gathering logs for dmesg ...
	I1209 03:19:32.651457  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:19:32.672034  291970 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:19:32.672081  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 03:19:32.777120  291970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	
	
	==> CRI-O <==
	Dec 09 03:19:34 pause-739105 crio[2827]: time="2025-12-09 03:19:34.269498937Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765250374269472265,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8e8de2ef-4822-4d7f-8a4b-3352cc6147c5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 03:19:34 pause-739105 crio[2827]: time="2025-12-09 03:19:34.270622240Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dfdbbd49-66e8-4266-ba37-6cdec4b1a9bc name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:19:34 pause-739105 crio[2827]: time="2025-12-09 03:19:34.270760579Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dfdbbd49-66e8-4266-ba37-6cdec4b1a9bc name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:19:34 pause-739105 crio[2827]: time="2025-12-09 03:19:34.271024649Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c8e068718377187d3b4b28e5adbf9015357aa760172aa9183c59e09e14d2968b,PodSandboxId:482099d32a99b56188357804fae68331d845e1c8a808e83ddd7cabdb87e9dfbe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765250359017539352,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rxfdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6d4576-8e92-4abd-8193-d8b9ddd7266d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc752c9728b3b332d395aa59842764f04d4caa40df28f144b683004557327f2,PodSandboxId:c55a751d02827c0d0f3788a23c6c21ce67c22da8309334cc84c158afcb60b96e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765250355273083336,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e02b020db785bb074900f6975d3f97,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":1025
7,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4a0bac8a98646d5c29d58b036502fcf131af10f10fa52995741cab94b0da2a1,PodSandboxId:0f8213b341c4e30564aa9565d039e4258055d12695daac0b882de811e7131527,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765250355249459598,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0bbece84dc5cd4f438fa4e2374c1ea,},Annotations:map[string]string{io.kuber
netes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbbae70353466efdc8393fadfdacf6a86580e99a587163778a008b33062df1d6,PodSandboxId:0ac8be03669c39343dfef2cd9ebf9e0f8ac28cfebe2b68a710e6c8cba13e8c3b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765250355211668801,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739105,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: e199ec6a0c235ca384a8aa2beb74b691,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb0e6d605cf6597e05179b758b729785cd27679b39c9bd63286c60feb85c8bf,PodSandboxId:68ee0f3f8016a7d9d60641a347822503a1249f2f0e4c89ba97376f34db491f1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765250355196871886,Labels:map[string]string{io.kubernetes.container.name: etcd,i
o.kubernetes.pod.name: etcd-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bae2024872c09e08f41c6347be77829b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6181d9c9b69dd99c98ab2b6e3a5cafeeeaf2e38e62616b8475b3d33316dd1944,PodSandboxId:c4989bf8fc2b1d4911c4a37cb3968b828e9b97ec217f1a3354e1973ace713fba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17652
50338139497040,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pt698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79e9e39-615a-4e96-afd4-3b7e856cc3f4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ae140d28849706ed992a697e3d674291e73d88d78d98be55c0ec624c7bfa79,PodSandboxId:482099d32a99b56188357804fae68331d845e1c8a808e8
3ddd7cabdb87e9dfbe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765250337055678970,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rxfdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6d4576-8e92-4abd-8193-d8b9ddd7266d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de7292ca8714180c12d0930642833f614c66bde369219707289a884965f99ae3,PodSandboxId:0f8213b341c4e30564aa9565d039e4258055d12695daac0b882de811e7131527,Metadata:&Conta
inerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765250336964797509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0bbece84dc5cd4f438fa4e2374c1ea,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4e129d74e7c8c6290797c64aa8bec8b1aff4105e7c26698b9f7ab
a0e9db897f,PodSandboxId:0ac8be03669c39343dfef2cd9ebf9e0f8ac28cfebe2b68a710e6c8cba13e8c3b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765250336959936499,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e199ec6a0c235ca384a8aa2beb74b691,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:4f488569925dc632f4dd1ca1b39df71b05e838a45e325da6c9ccc93996ce130c,PodSandboxId:68ee0f3f8016a7d9d60641a347822503a1249f2f0e4c89ba97376f34db491f1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765250336915222577,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bae2024872c09e08f41c6347be77829b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:026aaeb74336572baaaf4524f3103e9e18100483e64abfe1932d87eab57a79dc,PodSandboxId:c55a751d02827c0d0f3788a23c6c21ce67c22da8309334cc84c158afcb60b96e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765250336877876907,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e02b020db785bb074900f6975d3f97,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257
,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14444bc2d3af3e65b99e16ca7ed91add6e58b282592f24742c4b426c0748bf39,PodSandboxId:ebfcc6a20b38cdbb939e5982f88a8c4c79b0f242846aab771d33ab22e6261517,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765250280334320922,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pt698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79e9e39-615a-4e96-afd4-3b7e856cc3f4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dfdbbd49-66e8-4266-ba37-6cdec4b1a9bc name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:19:34 pause-739105 crio[2827]: time="2025-12-09 03:19:34.323214735Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=13c177c0-a90f-4c77-9d0b-bdfd15ebe28d name=/runtime.v1.RuntimeService/Version
	Dec 09 03:19:34 pause-739105 crio[2827]: time="2025-12-09 03:19:34.323693651Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=13c177c0-a90f-4c77-9d0b-bdfd15ebe28d name=/runtime.v1.RuntimeService/Version
	Dec 09 03:19:34 pause-739105 crio[2827]: time="2025-12-09 03:19:34.326564571Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bd6c246f-c57b-4513-babe-b676274e86c4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 03:19:34 pause-739105 crio[2827]: time="2025-12-09 03:19:34.327171319Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765250374327140701,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bd6c246f-c57b-4513-babe-b676274e86c4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 03:19:34 pause-739105 crio[2827]: time="2025-12-09 03:19:34.328586464Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=63db25b6-5420-4d66-a9ba-d9780e37e1b5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:19:34 pause-739105 crio[2827]: time="2025-12-09 03:19:34.328642586Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=63db25b6-5420-4d66-a9ba-d9780e37e1b5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:19:34 pause-739105 crio[2827]: time="2025-12-09 03:19:34.328938662Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c8e068718377187d3b4b28e5adbf9015357aa760172aa9183c59e09e14d2968b,PodSandboxId:482099d32a99b56188357804fae68331d845e1c8a808e83ddd7cabdb87e9dfbe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765250359017539352,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rxfdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6d4576-8e92-4abd-8193-d8b9ddd7266d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc752c9728b3b332d395aa59842764f04d4caa40df28f144b683004557327f2,PodSandboxId:c55a751d02827c0d0f3788a23c6c21ce67c22da8309334cc84c158afcb60b96e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765250355273083336,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e02b020db785bb074900f6975d3f97,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":1025
7,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4a0bac8a98646d5c29d58b036502fcf131af10f10fa52995741cab94b0da2a1,PodSandboxId:0f8213b341c4e30564aa9565d039e4258055d12695daac0b882de811e7131527,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765250355249459598,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0bbece84dc5cd4f438fa4e2374c1ea,},Annotations:map[string]string{io.kuber
netes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbbae70353466efdc8393fadfdacf6a86580e99a587163778a008b33062df1d6,PodSandboxId:0ac8be03669c39343dfef2cd9ebf9e0f8ac28cfebe2b68a710e6c8cba13e8c3b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765250355211668801,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739105,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: e199ec6a0c235ca384a8aa2beb74b691,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb0e6d605cf6597e05179b758b729785cd27679b39c9bd63286c60feb85c8bf,PodSandboxId:68ee0f3f8016a7d9d60641a347822503a1249f2f0e4c89ba97376f34db491f1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765250355196871886,Labels:map[string]string{io.kubernetes.container.name: etcd,i
o.kubernetes.pod.name: etcd-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bae2024872c09e08f41c6347be77829b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6181d9c9b69dd99c98ab2b6e3a5cafeeeaf2e38e62616b8475b3d33316dd1944,PodSandboxId:c4989bf8fc2b1d4911c4a37cb3968b828e9b97ec217f1a3354e1973ace713fba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17652
50338139497040,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pt698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79e9e39-615a-4e96-afd4-3b7e856cc3f4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ae140d28849706ed992a697e3d674291e73d88d78d98be55c0ec624c7bfa79,PodSandboxId:482099d32a99b56188357804fae68331d845e1c8a808e8
3ddd7cabdb87e9dfbe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765250337055678970,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rxfdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6d4576-8e92-4abd-8193-d8b9ddd7266d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de7292ca8714180c12d0930642833f614c66bde369219707289a884965f99ae3,PodSandboxId:0f8213b341c4e30564aa9565d039e4258055d12695daac0b882de811e7131527,Metadata:&Conta
inerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765250336964797509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0bbece84dc5cd4f438fa4e2374c1ea,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4e129d74e7c8c6290797c64aa8bec8b1aff4105e7c26698b9f7ab
a0e9db897f,PodSandboxId:0ac8be03669c39343dfef2cd9ebf9e0f8ac28cfebe2b68a710e6c8cba13e8c3b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765250336959936499,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e199ec6a0c235ca384a8aa2beb74b691,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:4f488569925dc632f4dd1ca1b39df71b05e838a45e325da6c9ccc93996ce130c,PodSandboxId:68ee0f3f8016a7d9d60641a347822503a1249f2f0e4c89ba97376f34db491f1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765250336915222577,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bae2024872c09e08f41c6347be77829b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:026aaeb74336572baaaf4524f3103e9e18100483e64abfe1932d87eab57a79dc,PodSandboxId:c55a751d02827c0d0f3788a23c6c21ce67c22da8309334cc84c158afcb60b96e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765250336877876907,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e02b020db785bb074900f6975d3f97,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257
,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14444bc2d3af3e65b99e16ca7ed91add6e58b282592f24742c4b426c0748bf39,PodSandboxId:ebfcc6a20b38cdbb939e5982f88a8c4c79b0f242846aab771d33ab22e6261517,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765250280334320922,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pt698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79e9e39-615a-4e96-afd4-3b7e856cc3f4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=63db25b6-5420-4d66-a9ba-d9780e37e1b5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:19:34 pause-739105 crio[2827]: time="2025-12-09 03:19:34.385028148Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0d9eb488-0fbd-4692-b9b1-454a0d9af8f4 name=/runtime.v1.RuntimeService/Version
	Dec 09 03:19:34 pause-739105 crio[2827]: time="2025-12-09 03:19:34.385133646Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0d9eb488-0fbd-4692-b9b1-454a0d9af8f4 name=/runtime.v1.RuntimeService/Version
	Dec 09 03:19:34 pause-739105 crio[2827]: time="2025-12-09 03:19:34.387451382Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=98a81fdf-c3e2-4c40-a10c-d6f24aba90fa name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 03:19:34 pause-739105 crio[2827]: time="2025-12-09 03:19:34.388366652Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765250374388324533,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98a81fdf-c3e2-4c40-a10c-d6f24aba90fa name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 03:19:34 pause-739105 crio[2827]: time="2025-12-09 03:19:34.389508919Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81cbdda9-98f0-4598-a97d-2540678d84a8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:19:34 pause-739105 crio[2827]: time="2025-12-09 03:19:34.389955020Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81cbdda9-98f0-4598-a97d-2540678d84a8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:19:34 pause-739105 crio[2827]: time="2025-12-09 03:19:34.390971337Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c8e068718377187d3b4b28e5adbf9015357aa760172aa9183c59e09e14d2968b,PodSandboxId:482099d32a99b56188357804fae68331d845e1c8a808e83ddd7cabdb87e9dfbe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765250359017539352,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rxfdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6d4576-8e92-4abd-8193-d8b9ddd7266d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc752c9728b3b332d395aa59842764f04d4caa40df28f144b683004557327f2,PodSandboxId:c55a751d02827c0d0f3788a23c6c21ce67c22da8309334cc84c158afcb60b96e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765250355273083336,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e02b020db785bb074900f6975d3f97,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":1025
7,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4a0bac8a98646d5c29d58b036502fcf131af10f10fa52995741cab94b0da2a1,PodSandboxId:0f8213b341c4e30564aa9565d039e4258055d12695daac0b882de811e7131527,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765250355249459598,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0bbece84dc5cd4f438fa4e2374c1ea,},Annotations:map[string]string{io.kuber
netes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbbae70353466efdc8393fadfdacf6a86580e99a587163778a008b33062df1d6,PodSandboxId:0ac8be03669c39343dfef2cd9ebf9e0f8ac28cfebe2b68a710e6c8cba13e8c3b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765250355211668801,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739105,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: e199ec6a0c235ca384a8aa2beb74b691,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb0e6d605cf6597e05179b758b729785cd27679b39c9bd63286c60feb85c8bf,PodSandboxId:68ee0f3f8016a7d9d60641a347822503a1249f2f0e4c89ba97376f34db491f1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765250355196871886,Labels:map[string]string{io.kubernetes.container.name: etcd,i
o.kubernetes.pod.name: etcd-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bae2024872c09e08f41c6347be77829b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6181d9c9b69dd99c98ab2b6e3a5cafeeeaf2e38e62616b8475b3d33316dd1944,PodSandboxId:c4989bf8fc2b1d4911c4a37cb3968b828e9b97ec217f1a3354e1973ace713fba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17652
50338139497040,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pt698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79e9e39-615a-4e96-afd4-3b7e856cc3f4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ae140d28849706ed992a697e3d674291e73d88d78d98be55c0ec624c7bfa79,PodSandboxId:482099d32a99b56188357804fae68331d845e1c8a808e8
3ddd7cabdb87e9dfbe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765250337055678970,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rxfdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6d4576-8e92-4abd-8193-d8b9ddd7266d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de7292ca8714180c12d0930642833f614c66bde369219707289a884965f99ae3,PodSandboxId:0f8213b341c4e30564aa9565d039e4258055d12695daac0b882de811e7131527,Metadata:&Conta
inerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765250336964797509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0bbece84dc5cd4f438fa4e2374c1ea,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4e129d74e7c8c6290797c64aa8bec8b1aff4105e7c26698b9f7ab
a0e9db897f,PodSandboxId:0ac8be03669c39343dfef2cd9ebf9e0f8ac28cfebe2b68a710e6c8cba13e8c3b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765250336959936499,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e199ec6a0c235ca384a8aa2beb74b691,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:4f488569925dc632f4dd1ca1b39df71b05e838a45e325da6c9ccc93996ce130c,PodSandboxId:68ee0f3f8016a7d9d60641a347822503a1249f2f0e4c89ba97376f34db491f1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765250336915222577,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bae2024872c09e08f41c6347be77829b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:026aaeb74336572baaaf4524f3103e9e18100483e64abfe1932d87eab57a79dc,PodSandboxId:c55a751d02827c0d0f3788a23c6c21ce67c22da8309334cc84c158afcb60b96e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765250336877876907,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e02b020db785bb074900f6975d3f97,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257
,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14444bc2d3af3e65b99e16ca7ed91add6e58b282592f24742c4b426c0748bf39,PodSandboxId:ebfcc6a20b38cdbb939e5982f88a8c4c79b0f242846aab771d33ab22e6261517,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765250280334320922,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pt698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79e9e39-615a-4e96-afd4-3b7e856cc3f4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81cbdda9-98f0-4598-a97d-2540678d84a8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:19:34 pause-739105 crio[2827]: time="2025-12-09 03:19:34.443986162Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fe48f4f5-cc11-4abf-b6f3-dceca0d9c1ae name=/runtime.v1.RuntimeService/Version
	Dec 09 03:19:34 pause-739105 crio[2827]: time="2025-12-09 03:19:34.444084148Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fe48f4f5-cc11-4abf-b6f3-dceca0d9c1ae name=/runtime.v1.RuntimeService/Version
	Dec 09 03:19:34 pause-739105 crio[2827]: time="2025-12-09 03:19:34.447425063Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=943c05c6-6f4b-4291-9501-daaa983f256a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 03:19:34 pause-739105 crio[2827]: time="2025-12-09 03:19:34.448190409Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765250374448152495,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=943c05c6-6f4b-4291-9501-daaa983f256a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 03:19:34 pause-739105 crio[2827]: time="2025-12-09 03:19:34.451677429Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=777e82b9-347a-4af6-be12-408a39db2a8a name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:19:34 pause-739105 crio[2827]: time="2025-12-09 03:19:34.452045923Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=777e82b9-347a-4af6-be12-408a39db2a8a name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:19:34 pause-739105 crio[2827]: time="2025-12-09 03:19:34.453285293Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c8e068718377187d3b4b28e5adbf9015357aa760172aa9183c59e09e14d2968b,PodSandboxId:482099d32a99b56188357804fae68331d845e1c8a808e83ddd7cabdb87e9dfbe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765250359017539352,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rxfdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6d4576-8e92-4abd-8193-d8b9ddd7266d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc752c9728b3b332d395aa59842764f04d4caa40df28f144b683004557327f2,PodSandboxId:c55a751d02827c0d0f3788a23c6c21ce67c22da8309334cc84c158afcb60b96e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765250355273083336,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e02b020db785bb074900f6975d3f97,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":1025
7,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4a0bac8a98646d5c29d58b036502fcf131af10f10fa52995741cab94b0da2a1,PodSandboxId:0f8213b341c4e30564aa9565d039e4258055d12695daac0b882de811e7131527,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765250355249459598,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0bbece84dc5cd4f438fa4e2374c1ea,},Annotations:map[string]string{io.kuber
netes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbbae70353466efdc8393fadfdacf6a86580e99a587163778a008b33062df1d6,PodSandboxId:0ac8be03669c39343dfef2cd9ebf9e0f8ac28cfebe2b68a710e6c8cba13e8c3b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765250355211668801,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739105,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: e199ec6a0c235ca384a8aa2beb74b691,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb0e6d605cf6597e05179b758b729785cd27679b39c9bd63286c60feb85c8bf,PodSandboxId:68ee0f3f8016a7d9d60641a347822503a1249f2f0e4c89ba97376f34db491f1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765250355196871886,Labels:map[string]string{io.kubernetes.container.name: etcd,i
o.kubernetes.pod.name: etcd-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bae2024872c09e08f41c6347be77829b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6181d9c9b69dd99c98ab2b6e3a5cafeeeaf2e38e62616b8475b3d33316dd1944,PodSandboxId:c4989bf8fc2b1d4911c4a37cb3968b828e9b97ec217f1a3354e1973ace713fba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17652
50338139497040,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pt698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79e9e39-615a-4e96-afd4-3b7e856cc3f4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ae140d28849706ed992a697e3d674291e73d88d78d98be55c0ec624c7bfa79,PodSandboxId:482099d32a99b56188357804fae68331d845e1c8a808e8
3ddd7cabdb87e9dfbe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765250337055678970,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rxfdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6d4576-8e92-4abd-8193-d8b9ddd7266d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de7292ca8714180c12d0930642833f614c66bde369219707289a884965f99ae3,PodSandboxId:0f8213b341c4e30564aa9565d039e4258055d12695daac0b882de811e7131527,Metadata:&Conta
inerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765250336964797509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0bbece84dc5cd4f438fa4e2374c1ea,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4e129d74e7c8c6290797c64aa8bec8b1aff4105e7c26698b9f7ab
a0e9db897f,PodSandboxId:0ac8be03669c39343dfef2cd9ebf9e0f8ac28cfebe2b68a710e6c8cba13e8c3b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765250336959936499,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e199ec6a0c235ca384a8aa2beb74b691,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:4f488569925dc632f4dd1ca1b39df71b05e838a45e325da6c9ccc93996ce130c,PodSandboxId:68ee0f3f8016a7d9d60641a347822503a1249f2f0e4c89ba97376f34db491f1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765250336915222577,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bae2024872c09e08f41c6347be77829b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:026aaeb74336572baaaf4524f3103e9e18100483e64abfe1932d87eab57a79dc,PodSandboxId:c55a751d02827c0d0f3788a23c6c21ce67c22da8309334cc84c158afcb60b96e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765250336877876907,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e02b020db785bb074900f6975d3f97,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257
,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14444bc2d3af3e65b99e16ca7ed91add6e58b282592f24742c4b426c0748bf39,PodSandboxId:ebfcc6a20b38cdbb939e5982f88a8c4c79b0f242846aab771d33ab22e6261517,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765250280334320922,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pt698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79e9e39-615a-4e96-afd4-3b7e856cc3f4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=777e82b9-347a-4af6-be12-408a39db2a8a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	c8e0687183771       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   15 seconds ago       Running             kube-proxy                2                   482099d32a99b       kube-proxy-rxfdq                       kube-system
	bfc752c9728b3       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   19 seconds ago       Running             kube-controller-manager   2                   c55a751d02827       kube-controller-manager-pause-739105   kube-system
	a4a0bac8a9864       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   19 seconds ago       Running             kube-scheduler            2                   0f8213b341c4e       kube-scheduler-pause-739105            kube-system
	dbbae70353466       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   19 seconds ago       Running             kube-apiserver            2                   0ac8be03669c3       kube-apiserver-pause-739105            kube-system
	cfb0e6d605cf6       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   19 seconds ago       Running             etcd                      2                   68ee0f3f8016a       etcd-pause-739105                      kube-system
	6181d9c9b69dd       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   36 seconds ago       Running             coredns                   1                   c4989bf8fc2b1       coredns-66bc5c9577-pt698               kube-system
	a7ae140d28849       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   37 seconds ago       Exited              kube-proxy                1                   482099d32a99b       kube-proxy-rxfdq                       kube-system
	de7292ca87141       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   37 seconds ago       Exited              kube-scheduler            1                   0f8213b341c4e       kube-scheduler-pause-739105            kube-system
	d4e129d74e7c8       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   37 seconds ago       Exited              kube-apiserver            1                   0ac8be03669c3       kube-apiserver-pause-739105            kube-system
	4f488569925dc       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   37 seconds ago       Exited              etcd                      1                   68ee0f3f8016a       etcd-pause-739105                      kube-system
	026aaeb743365       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   37 seconds ago       Exited              kube-controller-manager   1                   c55a751d02827       kube-controller-manager-pause-739105   kube-system
	14444bc2d3af3       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   ebfcc6a20b38c       coredns-66bc5c9577-pt698               kube-system
	
	
	==> coredns [14444bc2d3af3e65b99e16ca7ed91add6e58b282592f24742c4b426c0748bf39] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	[INFO] Reloading complete
	[INFO] 127.0.0.1:52019 - 1662 "HINFO IN 7246317530562464401.1012150286828863301. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.035918379s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6181d9c9b69dd99c98ab2b6e3a5cafeeeaf2e38e62616b8475b3d33316dd1944] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42732 - 64164 "HINFO IN 4463350708249044738.2882555661380039538. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.043994581s
	
	
	==> describe nodes <==
	Name:               pause-739105
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-739105
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=pause-739105
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T03_17_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 03:17:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-739105
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 03:19:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 03:19:18 +0000   Tue, 09 Dec 2025 03:17:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 03:19:18 +0000   Tue, 09 Dec 2025 03:17:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 03:19:18 +0000   Tue, 09 Dec 2025 03:17:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 03:19:18 +0000   Tue, 09 Dec 2025 03:17:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.124
	  Hostname:    pause-739105
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 692518152e8f47e4868b516930bda7b7
	  System UUID:                69251815-2e8f-47e4-868b-516930bda7b7
	  Boot ID:                    9cec8cfc-af96-40f1-a394-16001a213c66
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-pt698                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     97s
	  kube-system                 etcd-pause-739105                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         103s
	  kube-system                 kube-apiserver-pause-739105             250m (12%)    0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-controller-manager-pause-739105    200m (10%)    0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-proxy-rxfdq                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-scheduler-pause-739105             100m (5%)     0 (0%)      0 (0%)           0 (0%)         103s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 95s                  kube-proxy       
	  Normal  Starting                 16s                  kube-proxy       
	  Normal  Starting                 111s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  111s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  110s (x8 over 111s)  kubelet          Node pause-739105 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s (x8 over 111s)  kubelet          Node pause-739105 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s (x7 over 111s)  kubelet          Node pause-739105 status is now: NodeHasSufficientPID
	  Normal  Starting                 103s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  103s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    102s                 kubelet          Node pause-739105 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s                 kubelet          Node pause-739105 status is now: NodeHasSufficientPID
	  Normal  NodeReady                102s                 kubelet          Node pause-739105 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  102s                 kubelet          Node pause-739105 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           98s                  node-controller  Node pause-739105 event: Registered Node pause-739105 in Controller
	  Normal  Starting                 21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)    kubelet          Node pause-739105 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)    kubelet          Node pause-739105 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)    kubelet          Node pause-739105 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13s                  node-controller  Node pause-739105 event: Registered Node pause-739105 in Controller
	
	
	==> dmesg <==
	[Dec 9 03:17] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001643] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000418] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.212541] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.101678] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.127671] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.115619] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.143368] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.028333] kauditd_printk_skb: 18 callbacks suppressed
	[Dec 9 03:18] kauditd_printk_skb: 219 callbacks suppressed
	[ +26.707237] kauditd_printk_skb: 38 callbacks suppressed
	[Dec 9 03:19] kauditd_printk_skb: 320 callbacks suppressed
	[  +4.514413] kauditd_printk_skb: 80 callbacks suppressed
	
	
	==> etcd [4f488569925dc632f4dd1ca1b39df71b05e838a45e325da6c9ccc93996ce130c] <==
	{"level":"warn","ts":"2025-12-09T03:19:00.754551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:00.777009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:00.787207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:00.796160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:00.821160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:00.854825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:00.920101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36194","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-09T03:19:12.017493Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-09T03:19:12.017581Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-739105","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.124:2380"],"advertise-client-urls":["https://192.168.72.124:2379"]}
	{"level":"error","ts":"2025-12-09T03:19:12.017685Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-09T03:19:12.017821Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-09T03:19:12.019961Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-09T03:19:12.020023Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b73f48baf02853d8","current-leader-member-id":"b73f48baf02853d8"}
	{"level":"info","ts":"2025-12-09T03:19:12.020104Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-09T03:19:12.020140Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-09T03:19:12.020566Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.72.124:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-09T03:19:12.020645Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.72.124:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-09T03:19:12.020658Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.72.124:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-09T03:19:12.020466Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-09T03:19:12.020680Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-09T03:19:12.020688Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-09T03:19:12.025067Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.72.124:2380"}
	{"level":"error","ts":"2025-12-09T03:19:12.025175Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.72.124:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-09T03:19:12.025202Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.72.124:2380"}
	{"level":"info","ts":"2025-12-09T03:19:12.025210Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-739105","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.124:2380"],"advertise-client-urls":["https://192.168.72.124:2379"]}
	
	
	==> etcd [cfb0e6d605cf6597e05179b758b729785cd27679b39c9bd63286c60feb85c8bf] <==
	{"level":"warn","ts":"2025-12-09T03:19:17.345426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.364398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.405838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.432369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.444793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.485005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.502468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.529459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.560805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.570669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.585053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.597837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.607879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.618368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.627437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.641331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.658596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.672999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.685149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.694066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.711672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.725669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.750751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.756996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.865269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35446","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:19:36 up 2 min,  0 users,  load average: 2.18, 0.79, 0.29
	Linux pause-739105 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [d4e129d74e7c8c6290797c64aa8bec8b1aff4105e7c26698b9f7aba0e9db897f] <==
	I1209 03:19:01.910156       1 controller.go:176] quota evaluator worker shutdown
	I1209 03:19:01.910161       1 controller.go:176] quota evaluator worker shutdown
	I1209 03:19:01.910264       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I1209 03:19:01.912920       1 repairip.go:246] Shutting down ipallocator-repair-controller
	I1209 03:19:01.913286       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1209 03:19:02.536493       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1209 03:19:02.536536       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1209 03:19:03.535637       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1209 03:19:03.535914       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	W1209 03:19:04.536072       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1209 03:19:04.536405       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1209 03:19:05.535547       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1209 03:19:05.536506       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1209 03:19:06.535581       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1209 03:19:06.536185       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1209 03:19:07.536530       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1209 03:19:07.537079       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	E1209 03:19:08.536455       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1209 03:19:08.536602       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	W1209 03:19:09.535559       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1209 03:19:09.535651       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1209 03:19:10.536166       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1209 03:19:10.536171       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1209 03:19:11.535831       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1209 03:19:11.536543       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-apiserver [dbbae70353466efdc8393fadfdacf6a86580e99a587163778a008b33062df1d6] <==
	I1209 03:19:18.686374       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1209 03:19:18.686392       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1209 03:19:18.687516       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1209 03:19:18.687580       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1209 03:19:18.690749       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1209 03:19:18.690787       1 policy_source.go:240] refreshing policies
	I1209 03:19:18.686186       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1209 03:19:18.690975       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1209 03:19:18.691058       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1209 03:19:18.691104       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1209 03:19:18.695981       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1209 03:19:18.696078       1 aggregator.go:171] initial CRD sync complete...
	I1209 03:19:18.696087       1 autoregister_controller.go:144] Starting autoregister controller
	I1209 03:19:18.696094       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1209 03:19:18.696098       1 cache.go:39] Caches are synced for autoregister controller
	I1209 03:19:18.696144       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1209 03:19:18.715785       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 03:19:18.776217       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1209 03:19:19.493606       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1209 03:19:20.541948       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1209 03:19:20.630928       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1209 03:19:20.685882       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 03:19:20.695458       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 03:19:22.128699       1 controller.go:667] quota admission added evaluator for: endpoints
	I1209 03:19:22.288422       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [026aaeb74336572baaaf4524f3103e9e18100483e64abfe1932d87eab57a79dc] <==
	I1209 03:18:59.048355       1 serving.go:386] Generated self-signed cert in-memory
	I1209 03:19:00.563608       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1209 03:19:00.563665       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 03:19:00.567311       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1209 03:19:00.567477       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1209 03:19:00.569854       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 03:19:00.569797       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	E1209 03:19:11.566839       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.72.124:8443/healthz\": dial tcp 192.168.72.124:8443: connect: connection refused"
	
	
	==> kube-controller-manager [bfc752c9728b3b332d395aa59842764f04d4caa40df28f144b683004557327f2] <==
	I1209 03:19:22.022038       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1209 03:19:22.022532       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1209 03:19:22.023466       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1209 03:19:22.024364       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1209 03:19:22.024460       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1209 03:19:22.024505       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1209 03:19:22.026969       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1209 03:19:22.031454       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1209 03:19:22.037683       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1209 03:19:22.040490       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1209 03:19:22.040557       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1209 03:19:22.047236       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1209 03:19:22.047250       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1209 03:19:22.047287       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1209 03:19:22.047297       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1209 03:19:22.050582       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1209 03:19:22.053383       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1209 03:19:22.054094       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1209 03:19:22.068632       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1209 03:19:22.075344       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1209 03:19:22.086673       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 03:19:22.087895       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1209 03:19:22.088021       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1209 03:19:22.088083       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-739105"
	I1209 03:19:22.088164       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-proxy [a7ae140d28849706ed992a697e3d674291e73d88d78d98be55c0ec624c7bfa79] <==
	I1209 03:18:58.186656       1 server_linux.go:53] "Using iptables proxy"
	
	
	==> kube-proxy [c8e068718377187d3b4b28e5adbf9015357aa760172aa9183c59e09e14d2968b] <==
	I1209 03:19:19.233394       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1209 03:19:19.334604       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1209 03:19:19.334678       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.124"]
	E1209 03:19:19.334846       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 03:19:19.383686       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1209 03:19:19.383841       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 03:19:19.383874       1 server_linux.go:132] "Using iptables Proxier"
	I1209 03:19:19.398573       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 03:19:19.399079       1 server.go:527] "Version info" version="v1.34.2"
	I1209 03:19:19.399315       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 03:19:19.405374       1 config.go:200] "Starting service config controller"
	I1209 03:19:19.405792       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 03:19:19.405905       1 config.go:309] "Starting node config controller"
	I1209 03:19:19.405927       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 03:19:19.405942       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 03:19:19.406268       1 config.go:106] "Starting endpoint slice config controller"
	I1209 03:19:19.408469       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 03:19:19.406457       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 03:19:19.408543       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 03:19:19.506978       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 03:19:19.509285       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 03:19:19.509389       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a4a0bac8a98646d5c29d58b036502fcf131af10f10fa52995741cab94b0da2a1] <==
	I1209 03:19:17.249755       1 serving.go:386] Generated self-signed cert in-memory
	W1209 03:19:18.574454       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 03:19:18.576965       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 03:19:18.577221       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 03:19:18.577338       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 03:19:18.647296       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1209 03:19:18.647326       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 03:19:18.649857       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 03:19:18.649959       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1209 03:19:18.650068       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 03:19:18.649976       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 03:19:18.752077       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [de7292ca8714180c12d0930642833f614c66bde369219707289a884965f99ae3] <==
	I1209 03:19:00.538087       1 serving.go:386] Generated self-signed cert in-memory
	W1209 03:19:01.580975       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 03:19:01.581021       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 03:19:01.581032       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 03:19:01.581038       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 03:19:01.675823       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1209 03:19:01.677429       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1209 03:19:01.677508       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I1209 03:19:01.684409       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1209 03:19:01.684570       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 03:19:01.688151       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 03:19:01.684590       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1209 03:19:01.688590       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	E1209 03:19:01.690792       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 03:19:01.690900       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 03:19:01.690981       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1209 03:19:01.691064       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1209 03:19:01.691085       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1209 03:19:01.691111       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 09 03:19:16 pause-739105 kubelet[3875]: E1209 03:19:16.910207    3875 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-739105\" not found" node="pause-739105"
	Dec 09 03:19:17 pause-739105 kubelet[3875]: E1209 03:19:17.911224    3875 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-739105\" not found" node="pause-739105"
	Dec 09 03:19:17 pause-739105 kubelet[3875]: E1209 03:19:17.913121    3875 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-739105\" not found" node="pause-739105"
	Dec 09 03:19:17 pause-739105 kubelet[3875]: E1209 03:19:17.913359    3875 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-739105\" not found" node="pause-739105"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: I1209 03:19:18.612458    3875 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-739105"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: I1209 03:19:18.696386    3875 apiserver.go:52] "Watching apiserver"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: I1209 03:19:18.717694    3875 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: I1209 03:19:18.741928    3875 kubelet_node_status.go:124] "Node was previously registered" node="pause-739105"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: I1209 03:19:18.742048    3875 kubelet_node_status.go:78] "Successfully registered node" node="pause-739105"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: I1209 03:19:18.742076    3875 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: I1209 03:19:18.745411    3875 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: E1209 03:19:18.767528    3875 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-739105\" already exists" pod="kube-system/kube-controller-manager-pause-739105"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: I1209 03:19:18.767546    3875 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-739105"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: I1209 03:19:18.771930    3875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad6d4576-8e92-4abd-8193-d8b9ddd7266d-lib-modules\") pod \"kube-proxy-rxfdq\" (UID: \"ad6d4576-8e92-4abd-8193-d8b9ddd7266d\") " pod="kube-system/kube-proxy-rxfdq"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: I1209 03:19:18.771977    3875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad6d4576-8e92-4abd-8193-d8b9ddd7266d-xtables-lock\") pod \"kube-proxy-rxfdq\" (UID: \"ad6d4576-8e92-4abd-8193-d8b9ddd7266d\") " pod="kube-system/kube-proxy-rxfdq"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: E1209 03:19:18.784434    3875 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-739105\" already exists" pod="kube-system/kube-scheduler-pause-739105"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: I1209 03:19:18.784480    3875 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-739105"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: E1209 03:19:18.804539    3875 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-pause-739105\" already exists" pod="kube-system/etcd-pause-739105"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: I1209 03:19:18.804587    3875 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-739105"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: E1209 03:19:18.820664    3875 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-739105\" already exists" pod="kube-system/kube-apiserver-pause-739105"
	Dec 09 03:19:19 pause-739105 kubelet[3875]: I1209 03:19:19.003699    3875 scope.go:117] "RemoveContainer" containerID="a7ae140d28849706ed992a697e3d674291e73d88d78d98be55c0ec624c7bfa79"
	Dec 09 03:19:24 pause-739105 kubelet[3875]: E1209 03:19:24.849893    3875 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765250364849031868 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 09 03:19:24 pause-739105 kubelet[3875]: E1209 03:19:24.849937    3875 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765250364849031868 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 09 03:19:34 pause-739105 kubelet[3875]: E1209 03:19:34.852801    3875 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765250374852310367 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 09 03:19:34 pause-739105 kubelet[3875]: E1209 03:19:34.852825    3875 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765250374852310367 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-739105 -n pause-739105
helpers_test.go:269: (dbg) Run:  kubectl --context pause-739105 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-739105 -n pause-739105
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-739105 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-739105 logs -n 25: (1.770495283s)
E1209 03:19:39.457093  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                          ARGS                                                                                                           │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p NoKubernetes-992827                                                                                                                                                                                                  │ NoKubernetes-992827       │ jenkins │ v1.37.0 │ 09 Dec 25 03:15 UTC │ 09 Dec 25 03:15 UTC │
	│ start   │ -p NoKubernetes-992827 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                                     │ NoKubernetes-992827       │ jenkins │ v1.37.0 │ 09 Dec 25 03:15 UTC │ 09 Dec 25 03:16 UTC │
	│ ssh     │ force-systemd-flag-150140 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                    │ force-systemd-flag-150140 │ jenkins │ v1.37.0 │ 09 Dec 25 03:15 UTC │ 09 Dec 25 03:15 UTC │
	│ delete  │ -p force-systemd-flag-150140                                                                                                                                                                                            │ force-systemd-flag-150140 │ jenkins │ v1.37.0 │ 09 Dec 25 03:15 UTC │ 09 Dec 25 03:15 UTC │
	│ start   │ -p cert-options-358032 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio │ cert-options-358032       │ jenkins │ v1.37.0 │ 09 Dec 25 03:15 UTC │ 09 Dec 25 03:16 UTC │
	│ ssh     │ -p NoKubernetes-992827 sudo systemctl is-active --quiet service kubelet                                                                                                                                                 │ NoKubernetes-992827       │ jenkins │ v1.37.0 │ 09 Dec 25 03:16 UTC │                     │
	│ stop    │ -p NoKubernetes-992827                                                                                                                                                                                                  │ NoKubernetes-992827       │ jenkins │ v1.37.0 │ 09 Dec 25 03:16 UTC │ 09 Dec 25 03:16 UTC │
	│ start   │ -p NoKubernetes-992827 --driver=kvm2  --container-runtime=crio                                                                                                                                                          │ NoKubernetes-992827       │ jenkins │ v1.37.0 │ 09 Dec 25 03:16 UTC │ 09 Dec 25 03:16 UTC │
	│ ssh     │ -p NoKubernetes-992827 sudo systemctl is-active --quiet service kubelet                                                                                                                                                 │ NoKubernetes-992827       │ jenkins │ v1.37.0 │ 09 Dec 25 03:16 UTC │                     │
	│ delete  │ -p NoKubernetes-992827                                                                                                                                                                                                  │ NoKubernetes-992827       │ jenkins │ v1.37.0 │ 09 Dec 25 03:16 UTC │ 09 Dec 25 03:16 UTC │
	│ start   │ -p kubernetes-upgrade-321262 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                  │ kubernetes-upgrade-321262 │ jenkins │ v1.37.0 │ 09 Dec 25 03:16 UTC │ 09 Dec 25 03:17 UTC │
	│ ssh     │ cert-options-358032 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                             │ cert-options-358032       │ jenkins │ v1.37.0 │ 09 Dec 25 03:16 UTC │ 09 Dec 25 03:16 UTC │
	│ ssh     │ -p cert-options-358032 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                           │ cert-options-358032       │ jenkins │ v1.37.0 │ 09 Dec 25 03:16 UTC │ 09 Dec 25 03:16 UTC │
	│ delete  │ -p cert-options-358032                                                                                                                                                                                                  │ cert-options-358032       │ jenkins │ v1.37.0 │ 09 Dec 25 03:16 UTC │ 09 Dec 25 03:16 UTC │
	│ start   │ -p pause-739105 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                                                                 │ pause-739105              │ jenkins │ v1.37.0 │ 09 Dec 25 03:16 UTC │ 09 Dec 25 03:18 UTC │
	│ stop    │ -p kubernetes-upgrade-321262                                                                                                                                                                                            │ kubernetes-upgrade-321262 │ jenkins │ v1.37.0 │ 09 Dec 25 03:17 UTC │ 09 Dec 25 03:17 UTC │
	│ start   │ -p kubernetes-upgrade-321262 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                           │ kubernetes-upgrade-321262 │ jenkins │ v1.37.0 │ 09 Dec 25 03:17 UTC │ 09 Dec 25 03:18 UTC │
	│ start   │ -p kubernetes-upgrade-321262 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                                                                                         │ kubernetes-upgrade-321262 │ jenkins │ v1.37.0 │ 09 Dec 25 03:18 UTC │                     │
	│ start   │ -p kubernetes-upgrade-321262 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                           │ kubernetes-upgrade-321262 │ jenkins │ v1.37.0 │ 09 Dec 25 03:18 UTC │ 09 Dec 25 03:18 UTC │
	│ delete  │ -p kubernetes-upgrade-321262                                                                                                                                                                                            │ kubernetes-upgrade-321262 │ jenkins │ v1.37.0 │ 09 Dec 25 03:18 UTC │ 09 Dec 25 03:18 UTC │
	│ start   │ -p stopped-upgrade-644254 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-644254    │ jenkins │ v1.35.0 │ 09 Dec 25 03:18 UTC │ 09 Dec 25 03:19 UTC │
	│ start   │ -p pause-739105 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                          │ pause-739105              │ jenkins │ v1.37.0 │ 09 Dec 25 03:18 UTC │ 09 Dec 25 03:19 UTC │
	│ stop    │ stopped-upgrade-644254 stop                                                                                                                                                                                             │ stopped-upgrade-644254    │ jenkins │ v1.35.0 │ 09 Dec 25 03:19 UTC │ 09 Dec 25 03:19 UTC │
	│ start   │ -p stopped-upgrade-644254 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                  │ stopped-upgrade-644254    │ jenkins │ v1.37.0 │ 09 Dec 25 03:19 UTC │                     │
	│ start   │ -p cert-expiration-699833 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio                                                                                                                 │ cert-expiration-699833    │ jenkins │ v1.37.0 │ 09 Dec 25 03:19 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 03:19:11
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 03:19:11.159161  295238 out.go:360] Setting OutFile to fd 1 ...
	I1209 03:19:11.159279  295238 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 03:19:11.159283  295238 out.go:374] Setting ErrFile to fd 2...
	I1209 03:19:11.159287  295238 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 03:19:11.159593  295238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
	I1209 03:19:11.160351  295238 out.go:368] Setting JSON to false
	I1209 03:19:11.161716  295238 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":32501,"bootTime":1765217850,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 03:19:11.161787  295238 start.go:143] virtualization: kvm guest
	I1209 03:19:11.164912  295238 out.go:179] * [cert-expiration-699833] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 03:19:11.166722  295238 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 03:19:11.166737  295238 notify.go:221] Checking for updates...
	I1209 03:19:11.170050  295238 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:19:11.171845  295238 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-254936/kubeconfig
	I1209 03:19:11.173233  295238 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-254936/.minikube
	I1209 03:19:11.174650  295238 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 03:19:11.176092  295238 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:19:11.178509  295238 config.go:182] Loaded profile config "cert-expiration-699833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 03:19:11.179356  295238 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 03:19:11.218318  295238 out.go:179] * Using the kvm2 driver based on existing profile
	I1209 03:19:11.219755  295238 start.go:309] selected driver: kvm2
	I1209 03:19:11.219768  295238 start.go:927] validating driver "kvm2" against &{Name:cert-expiration-699833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-699833 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.113 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:19:11.219959  295238 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:19:11.221017  295238 cni.go:84] Creating CNI manager for ""
	I1209 03:19:11.221069  295238 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 03:19:11.221107  295238 start.go:353] cluster config:
	{Name:cert-expiration-699833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-699833 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.113 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:19:11.221212  295238 iso.go:125] acquiring lock: {Name:mk5e3a22cdf6cd1ed24c9a04adaf1049140c04b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:19:11.223204  295238 out.go:179] * Starting "cert-expiration-699833" primary control-plane node in "cert-expiration-699833" cluster
	I1209 03:19:10.808944  291970 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I1209 03:19:10.809756  291970 api_server.go:269] stopped: https://192.168.39.194:8443/healthz: Get "https://192.168.39.194:8443/healthz": dial tcp 192.168.39.194:8443: connect: connection refused
	I1209 03:19:10.809847  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 03:19:10.809916  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 03:19:10.857603  291970 cri.go:89] found id: "9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb"
	I1209 03:19:10.857635  291970 cri.go:89] found id: ""
	I1209 03:19:10.857646  291970 logs.go:282] 1 containers: [9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb]
	I1209 03:19:10.857752  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:10.862967  291970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 03:19:10.863073  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 03:19:10.911502  291970 cri.go:89] found id: "ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38"
	I1209 03:19:10.911529  291970 cri.go:89] found id: ""
	I1209 03:19:10.911538  291970 logs.go:282] 1 containers: [ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38]
	I1209 03:19:10.911615  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:10.917031  291970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 03:19:10.917128  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 03:19:10.976568  291970 cri.go:89] found id: "02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb"
	I1209 03:19:10.976602  291970 cri.go:89] found id: ""
	I1209 03:19:10.976615  291970 logs.go:282] 1 containers: [02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb]
	I1209 03:19:10.976697  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:10.982080  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 03:19:10.982170  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 03:19:11.035049  291970 cri.go:89] found id: "252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8"
	I1209 03:19:11.035078  291970 cri.go:89] found id: "3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81"
	I1209 03:19:11.035082  291970 cri.go:89] found id: ""
	I1209 03:19:11.035090  291970 logs.go:282] 2 containers: [252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8 3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81]
	I1209 03:19:11.035157  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:11.040279  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:11.045064  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 03:19:11.045151  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 03:19:11.085504  291970 cri.go:89] found id: "7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18"
	I1209 03:19:11.085529  291970 cri.go:89] found id: ""
	I1209 03:19:11.085538  291970 logs.go:282] 1 containers: [7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18]
	I1209 03:19:11.085596  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:11.090571  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 03:19:11.090642  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 03:19:11.149876  291970 cri.go:89] found id: "6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50"
	I1209 03:19:11.149902  291970 cri.go:89] found id: ""
	I1209 03:19:11.149911  291970 logs.go:282] 1 containers: [6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50]
	I1209 03:19:11.149976  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:11.158605  291970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 03:19:11.158684  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 03:19:11.209512  291970 cri.go:89] found id: ""
	I1209 03:19:11.209545  291970 logs.go:282] 0 containers: []
	W1209 03:19:11.209557  291970 logs.go:284] No container was found matching "kindnet"
	I1209 03:19:11.209564  291970 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 03:19:11.209631  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 03:19:11.257904  291970 cri.go:89] found id: "3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9"
	I1209 03:19:11.257939  291970 cri.go:89] found id: ""
	I1209 03:19:11.257952  291970 logs.go:282] 1 containers: [3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9]
	I1209 03:19:11.258048  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:11.264035  291970 logs.go:123] Gathering logs for kubelet ...
	I1209 03:19:11.264065  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:19:11.378322  291970 logs.go:123] Gathering logs for dmesg ...
	I1209 03:19:11.378366  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:19:11.397556  291970 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:19:11.397612  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 03:19:11.482941  291970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 03:19:11.482972  291970 logs.go:123] Gathering logs for kube-apiserver [9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb] ...
	I1209 03:19:11.483000  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb"
	I1209 03:19:11.529147  291970 logs.go:123] Gathering logs for kube-scheduler [3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81] ...
	I1209 03:19:11.529185  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81"
	I1209 03:19:11.589466  291970 logs.go:123] Gathering logs for kube-proxy [7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18] ...
	I1209 03:19:11.589501  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18"
	I1209 03:19:11.634447  291970 logs.go:123] Gathering logs for kube-controller-manager [6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50] ...
	I1209 03:19:11.634487  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50"
	I1209 03:19:11.677926  291970 logs.go:123] Gathering logs for etcd [ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38] ...
	I1209 03:19:11.677975  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38"
	I1209 03:19:11.741376  291970 logs.go:123] Gathering logs for coredns [02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb] ...
	I1209 03:19:11.741436  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb"
	I1209 03:19:11.788015  291970 logs.go:123] Gathering logs for kube-scheduler [252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8] ...
	I1209 03:19:11.788057  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8"
	I1209 03:19:11.875753  291970 logs.go:123] Gathering logs for storage-provisioner [3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9] ...
	I1209 03:19:11.875797  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9"
	I1209 03:19:11.924741  291970 logs.go:123] Gathering logs for CRI-O ...
	I1209 03:19:11.924781  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 03:19:12.291197  291970 logs.go:123] Gathering logs for container status ...
	I1209 03:19:12.291261  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:19:12.180443  294888 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 a7ae140d28849706ed992a697e3d674291e73d88d78d98be55c0ec624c7bfa79 de7292ca8714180c12d0930642833f614c66bde369219707289a884965f99ae3 d4e129d74e7c8c6290797c64aa8bec8b1aff4105e7c26698b9f7aba0e9db897f 4f488569925dc632f4dd1ca1b39df71b05e838a45e325da6c9ccc93996ce130c 026aaeb74336572baaaf4524f3103e9e18100483e64abfe1932d87eab57a79dc 14444bc2d3af3e65b99e16ca7ed91add6e58b282592f24742c4b426c0748bf39 5138113956e089324ea0848fc4790e50de8b0ee09cf9fdd118b82f91c472562e 18cfa357d2ab07d09e4da9dddc0d38271fe137c96d6622c238fbee708bf935f4 376cab59933e3388b96f857dfa05e838511dd7b6779ffcac8c061855adc1855d e40b35dad2ad345edf9be43d0fb0d94f4e825b44eb65ddcec0728f0d726d297b e63cf1615052ef840d03a63a203cda43fe9bbcd1ed6faa309baacdada59acbcd: (14.059466258s)
	W1209 03:19:12.180545  294888 kubeadm.go:649] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 a7ae140d28849706ed992a697e3d674291e73d88d78d98be55c0ec624c7bfa79 de7292ca8714180c12d0930642833f614c66bde369219707289a884965f99ae3 d4e129d74e7c8c6290797c64aa8bec8b1aff4105e7c26698b9f7aba0e9db897f 4f488569925dc632f4dd1ca1b39df71b05e838a45e325da6c9ccc93996ce130c 026aaeb74336572baaaf4524f3103e9e18100483e64abfe1932d87eab57a79dc 14444bc2d3af3e65b99e16ca7ed91add6e58b282592f24742c4b426c0748bf39 5138113956e089324ea0848fc4790e50de8b0ee09cf9fdd118b82f91c472562e 18cfa357d2ab07d09e4da9dddc0d38271fe137c96d6622c238fbee708bf935f4 376cab59933e3388b96f857dfa05e838511dd7b6779ffcac8c061855adc1855d e40b35dad2ad345edf9be43d0fb0d94f4e825b44eb65ddcec0728f0d726d297b e63cf1615052ef840d03a63a203cda43fe9bbcd1ed6faa309baacdada59acbcd: Process exited with status 1
	stdout:
	a7ae140d28849706ed992a697e3d674291e73d88d78d98be55c0ec624c7bfa79
	de7292ca8714180c12d0930642833f614c66bde369219707289a884965f99ae3
	d4e129d74e7c8c6290797c64aa8bec8b1aff4105e7c26698b9f7aba0e9db897f
	4f488569925dc632f4dd1ca1b39df71b05e838a45e325da6c9ccc93996ce130c
	026aaeb74336572baaaf4524f3103e9e18100483e64abfe1932d87eab57a79dc
	14444bc2d3af3e65b99e16ca7ed91add6e58b282592f24742c4b426c0748bf39
	
	stderr:
	E1209 03:19:12.172496    3620 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5138113956e089324ea0848fc4790e50de8b0ee09cf9fdd118b82f91c472562e\": container with ID starting with 5138113956e089324ea0848fc4790e50de8b0ee09cf9fdd118b82f91c472562e not found: ID does not exist" containerID="5138113956e089324ea0848fc4790e50de8b0ee09cf9fdd118b82f91c472562e"
	time="2025-12-09T03:19:12Z" level=fatal msg="stopping the container \"5138113956e089324ea0848fc4790e50de8b0ee09cf9fdd118b82f91c472562e\": rpc error: code = NotFound desc = could not find container \"5138113956e089324ea0848fc4790e50de8b0ee09cf9fdd118b82f91c472562e\": container with ID starting with 5138113956e089324ea0848fc4790e50de8b0ee09cf9fdd118b82f91c472562e not found: ID does not exist"
	I1209 03:19:12.180651  294888 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 03:19:12.224910  294888 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 03:19:12.243921  294888 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  9 03:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5638 Dec  9 03:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1954 Dec  9 03:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5586 Dec  9 03:17 /etc/kubernetes/scheduler.conf
	
	I1209 03:19:12.244014  294888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 03:19:12.260292  294888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 03:19:12.276150  294888 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 03:19:12.276246  294888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 03:19:12.295723  294888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 03:19:12.312370  294888 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 03:19:12.312451  294888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 03:19:12.329403  294888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 03:19:12.344645  294888 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 03:19:12.344741  294888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 03:19:12.362329  294888 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 03:19:12.378544  294888 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:19:12.439535  294888 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:19:14.238369  294888 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.798779031s)
	I1209 03:19:14.238461  294888 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:19:10.084020  295180 out.go:252] * Restarting existing kvm2 VM for "stopped-upgrade-644254" ...
	I1209 03:19:10.084137  295180 main.go:143] libmachine: starting domain...
	I1209 03:19:10.084156  295180 main.go:143] libmachine: ensuring networks are active...
	I1209 03:19:10.085277  295180 main.go:143] libmachine: Ensuring network default is active
	I1209 03:19:10.085804  295180 main.go:143] libmachine: Ensuring network mk-stopped-upgrade-644254 is active
	I1209 03:19:10.086460  295180 main.go:143] libmachine: getting domain XML...
	I1209 03:19:10.087737  295180 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>stopped-upgrade-644254</name>
	  <uuid>03069003-742c-4b71-8624-52d7d2c4f9eb</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22081-254936/.minikube/machines/stopped-upgrade-644254/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22081-254936/.minikube/machines/stopped-upgrade-644254/stopped-upgrade-644254.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:f1:f3:a2'/>
	      <source network='mk-stopped-upgrade-644254'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:45:10:64'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1209 03:19:11.593103  295180 main.go:143] libmachine: waiting for domain to start...
	I1209 03:19:11.594792  295180 main.go:143] libmachine: domain is now running
	I1209 03:19:11.594809  295180 main.go:143] libmachine: waiting for IP...
	I1209 03:19:11.595805  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:11.596509  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has current primary IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:11.596529  295180 main.go:143] libmachine: found domain IP: 192.168.61.28
	I1209 03:19:11.596537  295180 main.go:143] libmachine: reserving static IP address...
	I1209 03:19:11.596993  295180 main.go:143] libmachine: found host DHCP lease matching {name: "stopped-upgrade-644254", mac: "52:54:00:f1:f3:a2", ip: "192.168.61.28"} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:18:44 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:11.597044  295180 main.go:143] libmachine: skip adding static IP to network mk-stopped-upgrade-644254 - found existing host DHCP lease matching {name: "stopped-upgrade-644254", mac: "52:54:00:f1:f3:a2", ip: "192.168.61.28"}
	I1209 03:19:11.597057  295180 main.go:143] libmachine: reserved static IP address 192.168.61.28 for domain stopped-upgrade-644254
	I1209 03:19:11.597068  295180 main.go:143] libmachine: waiting for SSH...
	I1209 03:19:11.597076  295180 main.go:143] libmachine: Getting to WaitForSSH function...
	I1209 03:19:11.600041  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:11.600641  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:18:44 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:11.600681  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:11.600979  295180 main.go:143] libmachine: Using SSH client type: native
	I1209 03:19:11.601347  295180 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I1209 03:19:11.601372  295180 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1209 03:19:14.704103  295180 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.61.28:22: connect: no route to host
	I1209 03:19:11.224687  295238 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 03:19:11.224721  295238 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-254936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1209 03:19:11.224742  295238 cache.go:65] Caching tarball of preloaded images
	I1209 03:19:11.224927  295238 preload.go:238] Found /home/jenkins/minikube-integration/22081-254936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 03:19:11.224939  295238 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 03:19:11.225089  295238 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/cert-expiration-699833/config.json ...
	I1209 03:19:11.225427  295238 start.go:360] acquireMachinesLock for cert-expiration-699833: {Name:mkb4bf4bc2a6ad90b53de9be214957ca6809cd32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:19:14.856908  291970 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I1209 03:19:14.857647  291970 api_server.go:269] stopped: https://192.168.39.194:8443/healthz: Get "https://192.168.39.194:8443/healthz": dial tcp 192.168.39.194:8443: connect: connection refused
	I1209 03:19:14.857712  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 03:19:14.857764  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 03:19:14.911951  291970 cri.go:89] found id: "9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb"
	I1209 03:19:14.911984  291970 cri.go:89] found id: ""
	I1209 03:19:14.912008  291970 logs.go:282] 1 containers: [9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb]
	I1209 03:19:14.912084  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:14.916641  291970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 03:19:14.916724  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 03:19:14.960559  291970 cri.go:89] found id: "ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38"
	I1209 03:19:14.960591  291970 cri.go:89] found id: ""
	I1209 03:19:14.960601  291970 logs.go:282] 1 containers: [ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38]
	I1209 03:19:14.960680  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:14.966648  291970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 03:19:14.966750  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 03:19:15.009754  291970 cri.go:89] found id: "02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb"
	I1209 03:19:15.009785  291970 cri.go:89] found id: ""
	I1209 03:19:15.009797  291970 logs.go:282] 1 containers: [02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb]
	I1209 03:19:15.009881  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:15.015933  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 03:19:15.016013  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 03:19:15.070436  291970 cri.go:89] found id: "252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8"
	I1209 03:19:15.070466  291970 cri.go:89] found id: "3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81"
	I1209 03:19:15.070471  291970 cri.go:89] found id: ""
	I1209 03:19:15.070481  291970 logs.go:282] 2 containers: [252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8 3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81]
	I1209 03:19:15.070548  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:15.076510  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:15.082037  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 03:19:15.082126  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 03:19:15.127203  291970 cri.go:89] found id: "7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18"
	I1209 03:19:15.127236  291970 cri.go:89] found id: ""
	I1209 03:19:15.127249  291970 logs.go:282] 1 containers: [7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18]
	I1209 03:19:15.127332  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:15.133987  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 03:19:15.134065  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 03:19:15.180414  291970 cri.go:89] found id: "6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50"
	I1209 03:19:15.180443  291970 cri.go:89] found id: ""
	I1209 03:19:15.180455  291970 logs.go:282] 1 containers: [6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50]
	I1209 03:19:15.180526  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:15.186428  291970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 03:19:15.186537  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 03:19:15.240543  291970 cri.go:89] found id: ""
	I1209 03:19:15.240574  291970 logs.go:282] 0 containers: []
	W1209 03:19:15.240586  291970 logs.go:284] No container was found matching "kindnet"
	I1209 03:19:15.240594  291970 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 03:19:15.240657  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 03:19:15.296407  291970 cri.go:89] found id: "3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9"
	I1209 03:19:15.296440  291970 cri.go:89] found id: ""
	I1209 03:19:15.296451  291970 logs.go:282] 1 containers: [3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9]
	I1209 03:19:15.296528  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:15.302721  291970 logs.go:123] Gathering logs for etcd [ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38] ...
	I1209 03:19:15.302755  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38"
	I1209 03:19:15.372638  291970 logs.go:123] Gathering logs for coredns [02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb] ...
	I1209 03:19:15.372691  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb"
	I1209 03:19:15.420693  291970 logs.go:123] Gathering logs for kube-proxy [7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18] ...
	I1209 03:19:15.420732  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18"
	I1209 03:19:15.469893  291970 logs.go:123] Gathering logs for storage-provisioner [3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9] ...
	I1209 03:19:15.469948  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9"
	I1209 03:19:15.521259  291970 logs.go:123] Gathering logs for CRI-O ...
	I1209 03:19:15.521302  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 03:19:15.904296  291970 logs.go:123] Gathering logs for container status ...
	I1209 03:19:15.904341  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:19:15.957536  291970 logs.go:123] Gathering logs for dmesg ...
	I1209 03:19:15.957579  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:19:15.980131  291970 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:19:15.980178  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 03:19:16.059315  291970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 03:19:16.059349  291970 logs.go:123] Gathering logs for kube-scheduler [252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8] ...
	I1209 03:19:16.059368  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8"
	I1209 03:19:16.169095  291970 logs.go:123] Gathering logs for kube-scheduler [3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81] ...
	I1209 03:19:16.169144  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81"
	I1209 03:19:16.229345  291970 logs.go:123] Gathering logs for kube-controller-manager [6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50] ...
	I1209 03:19:16.229398  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50"
	I1209 03:19:16.282639  291970 logs.go:123] Gathering logs for kubelet ...
	I1209 03:19:16.282675  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:19:16.426733  291970 logs.go:123] Gathering logs for kube-apiserver [9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb] ...
	I1209 03:19:16.426780  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb"
	I1209 03:19:14.569336  294888 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:19:14.642490  294888 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:19:14.752905  294888 api_server.go:52] waiting for apiserver process to appear ...
	I1209 03:19:14.753027  294888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 03:19:15.254034  294888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 03:19:15.753198  294888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 03:19:15.799437  294888 api_server.go:72] duration metric: took 1.046549209s to wait for apiserver process to appear ...
	I1209 03:19:15.799468  294888 api_server.go:88] waiting for apiserver healthz status ...
	I1209 03:19:15.799493  294888 api_server.go:253] Checking apiserver healthz at https://192.168.72.124:8443/healthz ...
	I1209 03:19:18.508800  294888 api_server.go:279] https://192.168.72.124:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 03:19:18.508852  294888 api_server.go:103] status: https://192.168.72.124:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 03:19:18.508872  294888 api_server.go:253] Checking apiserver healthz at https://192.168.72.124:8443/healthz ...
	I1209 03:19:18.557050  294888 api_server.go:279] https://192.168.72.124:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 03:19:18.557090  294888 api_server.go:103] status: https://192.168.72.124:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 03:19:18.800529  294888 api_server.go:253] Checking apiserver healthz at https://192.168.72.124:8443/healthz ...
	I1209 03:19:18.806571  294888 api_server.go:279] https://192.168.72.124:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 03:19:18.806602  294888 api_server.go:103] status: https://192.168.72.124:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 03:19:19.300341  294888 api_server.go:253] Checking apiserver healthz at https://192.168.72.124:8443/healthz ...
	I1209 03:19:19.306985  294888 api_server.go:279] https://192.168.72.124:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 03:19:19.307027  294888 api_server.go:103] status: https://192.168.72.124:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 03:19:19.799767  294888 api_server.go:253] Checking apiserver healthz at https://192.168.72.124:8443/healthz ...
	I1209 03:19:19.805846  294888 api_server.go:279] https://192.168.72.124:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 03:19:19.805883  294888 api_server.go:103] status: https://192.168.72.124:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 03:19:20.299551  294888 api_server.go:253] Checking apiserver healthz at https://192.168.72.124:8443/healthz ...
	I1209 03:19:20.306319  294888 api_server.go:279] https://192.168.72.124:8443/healthz returned 200:
	ok
	I1209 03:19:20.319019  294888 api_server.go:141] control plane version: v1.34.2
	I1209 03:19:20.319057  294888 api_server.go:131] duration metric: took 4.5195811s to wait for apiserver health ...
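	(Editor's note: the 500 responses above come from the rbac/bootstrap-roles and scheduling poststarthooks that have not finished yet; minikube simply re-polls /healthz roughly every 500ms until it returns 200, which is what the duration metric above measures. The following is a minimal Go sketch of that kind of poll, not minikube's actual api_server.go code; it assumes the apiserver's self-signed certificate and therefore skips TLS verification, whereas minikube verifies against the cluster CA.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// pollHealthz re-checks the apiserver /healthz endpoint until it returns
	// HTTP 200 or the timeout expires, mirroring the retry loop in the log above.
	func pollHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption for this sketch only: skip cert verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned "ok"
				}
				// A 500 body lists which poststarthooks are still failing.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // the log shows ~500ms between checks
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := pollHealthz("https://192.168.72.124:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}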
	I1209 03:19:20.319069  294888 cni.go:84] Creating CNI manager for ""
	I1209 03:19:20.319078  294888 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 03:19:20.321406  294888 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 03:19:20.322999  294888 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 03:19:20.342654  294888 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 03:19:20.373443  294888 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 03:19:20.382359  294888 system_pods.go:59] 6 kube-system pods found
	I1209 03:19:20.382423  294888 system_pods.go:61] "coredns-66bc5c9577-pt698" [d79e9e39-615a-4e96-afd4-3b7e856cc3f4] Running
	I1209 03:19:20.382444  294888 system_pods.go:61] "etcd-pause-739105" [dce64bb8-662c-4e83-87d0-fa92866158e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 03:19:20.382458  294888 system_pods.go:61] "kube-apiserver-pause-739105" [e5bcabca-af2b-4f32-a16e-505e11121da2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 03:19:20.382475  294888 system_pods.go:61] "kube-controller-manager-pause-739105" [d2343ee0-bbbe-4f54-99ed-558aac463ec4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 03:19:20.382489  294888 system_pods.go:61] "kube-proxy-rxfdq" [ad6d4576-8e92-4abd-8193-d8b9ddd7266d] Running
	I1209 03:19:20.382504  294888 system_pods.go:61] "kube-scheduler-pause-739105" [2dfa00a8-526b-4380-bc39-645001782835] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 03:19:20.382515  294888 system_pods.go:74] duration metric: took 9.041331ms to wait for pod list to return data ...
	I1209 03:19:20.382531  294888 node_conditions.go:102] verifying NodePressure condition ...
	I1209 03:19:20.387618  294888 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 03:19:20.387657  294888 node_conditions.go:123] node cpu capacity is 2
	I1209 03:19:20.387676  294888 node_conditions.go:105] duration metric: took 5.138717ms to run NodePressure ...
	I1209 03:19:20.387748  294888 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:19:20.716929  294888 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1209 03:19:20.722989  294888 kubeadm.go:744] kubelet initialised
	I1209 03:19:20.723024  294888 kubeadm.go:745] duration metric: took 6.058648ms waiting for restarted kubelet to initialise ...
	I1209 03:19:20.723049  294888 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 03:19:20.757220  294888 ops.go:34] apiserver oom_adj: -16
	I1209 03:19:20.757255  294888 kubeadm.go:602] duration metric: took 22.789016706s to restartPrimaryControlPlane
	I1209 03:19:20.757270  294888 kubeadm.go:403] duration metric: took 22.955955066s to StartCluster
	I1209 03:19:20.757294  294888 settings.go:142] acquiring lock: {Name:mkec34d0133156567c6c6050ab2f8de3f197c63b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:19:20.757394  294888 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-254936/kubeconfig
	I1209 03:19:20.758934  294888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-254936/kubeconfig: {Name:mkaafbe94dbea876978b17d37022d815642e1aad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:19:20.759300  294888 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.72.124 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 03:19:20.759452  294888 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 03:19:20.759570  294888 config.go:182] Loaded profile config "pause-739105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 03:19:20.761167  294888 out.go:179] * Verifying Kubernetes components...
	I1209 03:19:20.761203  294888 out.go:179] * Enabled addons: 
	I1209 03:19:18.984449  291970 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I1209 03:19:18.985280  291970 api_server.go:269] stopped: https://192.168.39.194:8443/healthz: Get "https://192.168.39.194:8443/healthz": dial tcp 192.168.39.194:8443: connect: connection refused
	I1209 03:19:18.985347  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 03:19:18.985414  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 03:19:19.030950  291970 cri.go:89] found id: "9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb"
	I1209 03:19:19.030975  291970 cri.go:89] found id: ""
	I1209 03:19:19.030984  291970 logs.go:282] 1 containers: [9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb]
	I1209 03:19:19.031057  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:19.037226  291970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 03:19:19.037325  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 03:19:19.085128  291970 cri.go:89] found id: "ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38"
	I1209 03:19:19.085160  291970 cri.go:89] found id: ""
	I1209 03:19:19.085172  291970 logs.go:282] 1 containers: [ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38]
	I1209 03:19:19.085253  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:19.091568  291970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 03:19:19.091668  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 03:19:19.159167  291970 cri.go:89] found id: "02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb"
	I1209 03:19:19.159201  291970 cri.go:89] found id: ""
	I1209 03:19:19.159214  291970 logs.go:282] 1 containers: [02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb]
	I1209 03:19:19.159300  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:19.164545  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 03:19:19.164653  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 03:19:19.211716  291970 cri.go:89] found id: "252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8"
	I1209 03:19:19.211744  291970 cri.go:89] found id: "3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81"
	I1209 03:19:19.211749  291970 cri.go:89] found id: ""
	I1209 03:19:19.211760  291970 logs.go:282] 2 containers: [252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8 3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81]
	I1209 03:19:19.211855  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:19.218343  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:19.224001  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 03:19:19.224089  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 03:19:19.271073  291970 cri.go:89] found id: "7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18"
	I1209 03:19:19.271105  291970 cri.go:89] found id: ""
	I1209 03:19:19.271115  291970 logs.go:282] 1 containers: [7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18]
	I1209 03:19:19.271183  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:19.275972  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 03:19:19.276062  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 03:19:19.323127  291970 cri.go:89] found id: "6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50"
	I1209 03:19:19.323162  291970 cri.go:89] found id: ""
	I1209 03:19:19.323174  291970 logs.go:282] 1 containers: [6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50]
	I1209 03:19:19.323242  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:19.328603  291970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 03:19:19.328699  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 03:19:19.381128  291970 cri.go:89] found id: ""
	I1209 03:19:19.381159  291970 logs.go:282] 0 containers: []
	W1209 03:19:19.381170  291970 logs.go:284] No container was found matching "kindnet"
	I1209 03:19:19.381177  291970 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 03:19:19.381278  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 03:19:19.425899  291970 cri.go:89] found id: "3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9"
	I1209 03:19:19.425930  291970 cri.go:89] found id: ""
	I1209 03:19:19.425941  291970 logs.go:282] 1 containers: [3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9]
	I1209 03:19:19.426015  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:19.430521  291970 logs.go:123] Gathering logs for kube-scheduler [252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8] ...
	I1209 03:19:19.430548  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8"
	I1209 03:19:19.537382  291970 logs.go:123] Gathering logs for kube-scheduler [3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81] ...
	I1209 03:19:19.537443  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81"
	I1209 03:19:19.615658  291970 logs.go:123] Gathering logs for kube-proxy [7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18] ...
	I1209 03:19:19.615797  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18"
	I1209 03:19:19.678131  291970 logs.go:123] Gathering logs for kube-controller-manager [6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50] ...
	I1209 03:19:19.678189  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50"
	I1209 03:19:19.729143  291970 logs.go:123] Gathering logs for container status ...
	I1209 03:19:19.729188  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:19:19.782856  291970 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:19:19.782902  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 03:19:19.899676  291970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 03:19:19.899711  291970 logs.go:123] Gathering logs for etcd [ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38] ...
	I1209 03:19:19.899734  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38"
	I1209 03:19:19.952579  291970 logs.go:123] Gathering logs for coredns [02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb] ...
	I1209 03:19:19.952620  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb"
	I1209 03:19:19.997188  291970 logs.go:123] Gathering logs for storage-provisioner [3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9] ...
	I1209 03:19:19.997240  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9"
	I1209 03:19:20.048105  291970 logs.go:123] Gathering logs for CRI-O ...
	I1209 03:19:20.048139  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 03:19:20.402033  291970 logs.go:123] Gathering logs for kubelet ...
	I1209 03:19:20.402087  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:19:20.519493  291970 logs.go:123] Gathering logs for dmesg ...
	I1209 03:19:20.519549  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:19:20.543912  291970 logs.go:123] Gathering logs for kube-apiserver [9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb] ...
	I1209 03:19:20.543965  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb"
	I1209 03:19:23.109313  291970 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I1209 03:19:23.110107  291970 api_server.go:269] stopped: https://192.168.39.194:8443/healthz: Get "https://192.168.39.194:8443/healthz": dial tcp 192.168.39.194:8443: connect: connection refused
	I1209 03:19:23.110198  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 03:19:23.110277  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 03:19:23.175043  291970 cri.go:89] found id: "9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb"
	I1209 03:19:23.175068  291970 cri.go:89] found id: ""
	I1209 03:19:23.175077  291970 logs.go:282] 1 containers: [9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb]
	I1209 03:19:23.175145  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:23.181920  291970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 03:19:23.182029  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 03:19:23.229901  291970 cri.go:89] found id: "ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38"
	I1209 03:19:23.229934  291970 cri.go:89] found id: ""
	I1209 03:19:23.229946  291970 logs.go:282] 1 containers: [ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38]
	I1209 03:19:23.230023  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:23.235301  291970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 03:19:23.235394  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 03:19:23.288345  291970 cri.go:89] found id: "02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb"
	I1209 03:19:23.288377  291970 cri.go:89] found id: ""
	I1209 03:19:23.288388  291970 logs.go:282] 1 containers: [02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb]
	I1209 03:19:23.288463  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:23.293812  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 03:19:23.294040  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 03:19:23.356627  291970 cri.go:89] found id: "252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8"
	I1209 03:19:23.356658  291970 cri.go:89] found id: "3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81"
	I1209 03:19:23.356666  291970 cri.go:89] found id: ""
	I1209 03:19:23.356678  291970 logs.go:282] 2 containers: [252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8 3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81]
	I1209 03:19:23.356758  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:23.363202  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:23.370013  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 03:19:23.370103  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 03:19:23.427699  291970 cri.go:89] found id: "7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18"
	I1209 03:19:23.427730  291970 cri.go:89] found id: ""
	I1209 03:19:23.427741  291970 logs.go:282] 1 containers: [7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18]
	I1209 03:19:23.427817  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:23.435644  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 03:19:23.435753  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 03:19:23.482594  291970 cri.go:89] found id: "6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50"
	I1209 03:19:23.482629  291970 cri.go:89] found id: ""
	I1209 03:19:23.482642  291970 logs.go:282] 1 containers: [6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50]
	I1209 03:19:23.482720  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:23.488088  291970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 03:19:23.488184  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 03:19:23.527661  291970 cri.go:89] found id: ""
	I1209 03:19:23.527686  291970 logs.go:282] 0 containers: []
	W1209 03:19:23.527695  291970 logs.go:284] No container was found matching "kindnet"
	I1209 03:19:23.527701  291970 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 03:19:23.527756  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 03:19:23.574728  291970 cri.go:89] found id: "3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9"
	I1209 03:19:23.574757  291970 cri.go:89] found id: ""
	I1209 03:19:23.574768  291970 logs.go:282] 1 containers: [3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9]
	I1209 03:19:23.574861  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:23.579819  291970 logs.go:123] Gathering logs for dmesg ...
	I1209 03:19:23.579869  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:19:23.599801  291970 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:19:23.599871  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:19:20.762603  294888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:19:20.762602  294888 addons.go:530] duration metric: took 3.168533ms for enable addons: enabled=[]
	I1209 03:19:21.013543  294888 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 03:19:21.046017  294888 node_ready.go:35] waiting up to 6m0s for node "pause-739105" to be "Ready" ...
	I1209 03:19:21.049431  294888 node_ready.go:49] node "pause-739105" is "Ready"
	I1209 03:19:21.049464  294888 node_ready.go:38] duration metric: took 3.383872ms for node "pause-739105" to be "Ready" ...
	I1209 03:19:21.049481  294888 api_server.go:52] waiting for apiserver process to appear ...
	I1209 03:19:21.049535  294888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 03:19:21.071310  294888 api_server.go:72] duration metric: took 311.962801ms to wait for apiserver process to appear ...
	I1209 03:19:21.071344  294888 api_server.go:88] waiting for apiserver healthz status ...
	I1209 03:19:21.071372  294888 api_server.go:253] Checking apiserver healthz at https://192.168.72.124:8443/healthz ...
	I1209 03:19:21.085102  294888 api_server.go:279] https://192.168.72.124:8443/healthz returned 200:
	ok
	I1209 03:19:21.086551  294888 api_server.go:141] control plane version: v1.34.2
	I1209 03:19:21.086577  294888 api_server.go:131] duration metric: took 15.226442ms to wait for apiserver health ...
	I1209 03:19:21.086587  294888 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 03:19:21.092727  294888 system_pods.go:59] 6 kube-system pods found
	I1209 03:19:21.092753  294888 system_pods.go:61] "coredns-66bc5c9577-pt698" [d79e9e39-615a-4e96-afd4-3b7e856cc3f4] Running
	I1209 03:19:21.092762  294888 system_pods.go:61] "etcd-pause-739105" [dce64bb8-662c-4e83-87d0-fa92866158e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 03:19:21.092769  294888 system_pods.go:61] "kube-apiserver-pause-739105" [e5bcabca-af2b-4f32-a16e-505e11121da2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 03:19:21.092777  294888 system_pods.go:61] "kube-controller-manager-pause-739105" [d2343ee0-bbbe-4f54-99ed-558aac463ec4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 03:19:21.092781  294888 system_pods.go:61] "kube-proxy-rxfdq" [ad6d4576-8e92-4abd-8193-d8b9ddd7266d] Running
	I1209 03:19:21.092788  294888 system_pods.go:61] "kube-scheduler-pause-739105" [2dfa00a8-526b-4380-bc39-645001782835] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 03:19:21.092793  294888 system_pods.go:74] duration metric: took 6.20065ms to wait for pod list to return data ...
	I1209 03:19:21.092803  294888 default_sa.go:34] waiting for default service account to be created ...
	I1209 03:19:21.095508  294888 default_sa.go:45] found service account: "default"
	I1209 03:19:21.095531  294888 default_sa.go:55] duration metric: took 2.721055ms for default service account to be created ...
	I1209 03:19:21.095542  294888 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 03:19:21.099792  294888 system_pods.go:86] 6 kube-system pods found
	I1209 03:19:21.099820  294888 system_pods.go:89] "coredns-66bc5c9577-pt698" [d79e9e39-615a-4e96-afd4-3b7e856cc3f4] Running
	I1209 03:19:21.099858  294888 system_pods.go:89] "etcd-pause-739105" [dce64bb8-662c-4e83-87d0-fa92866158e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 03:19:21.099867  294888 system_pods.go:89] "kube-apiserver-pause-739105" [e5bcabca-af2b-4f32-a16e-505e11121da2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 03:19:21.099896  294888 system_pods.go:89] "kube-controller-manager-pause-739105" [d2343ee0-bbbe-4f54-99ed-558aac463ec4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 03:19:21.099903  294888 system_pods.go:89] "kube-proxy-rxfdq" [ad6d4576-8e92-4abd-8193-d8b9ddd7266d] Running
	I1209 03:19:21.099913  294888 system_pods.go:89] "kube-scheduler-pause-739105" [2dfa00a8-526b-4380-bc39-645001782835] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 03:19:21.099934  294888 system_pods.go:126] duration metric: took 4.374846ms to wait for k8s-apps to be running ...
	I1209 03:19:21.099950  294888 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 03:19:21.100015  294888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 03:19:21.117618  294888 system_svc.go:56] duration metric: took 17.654937ms WaitForService to wait for kubelet
	I1209 03:19:21.117655  294888 kubeadm.go:587] duration metric: took 358.316779ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:19:21.117672  294888 node_conditions.go:102] verifying NodePressure condition ...
	I1209 03:19:21.120626  294888 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 03:19:21.120658  294888 node_conditions.go:123] node cpu capacity is 2
	I1209 03:19:21.120675  294888 node_conditions.go:105] duration metric: took 2.997144ms to run NodePressure ...
	I1209 03:19:21.120691  294888 start.go:242] waiting for startup goroutines ...
	I1209 03:19:21.120701  294888 start.go:247] waiting for cluster config update ...
	I1209 03:19:21.120712  294888 start.go:256] writing updated cluster config ...
	I1209 03:19:21.121158  294888 ssh_runner.go:195] Run: rm -f paused
	I1209 03:19:21.130333  294888 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 03:19:21.131462  294888 kapi.go:59] client config for pause-739105: &rest.Config{Host:"https://192.168.72.124:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-254936/.minikube/profiles/pause-739105/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-254936/.minikube/profiles/pause-739105/client.key", CAFile:"/home/jenkins/minikube-integration/22081-254936/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28162e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
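	(Editor's note: the rest.Config dump above shows the client credentials minikube uses for the pause-739105 profile before the pod-readiness wait that follows. Below is a minimal client-go sketch, offered only as an illustration and not as minikube's kapi.go, that builds an equivalent config from those certificate paths and lists the kube-system pods being waited on; the paths and host are taken from the log, everything else is assumed.)

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Host and certificate paths copied from the rest.Config dump above.
		cfg := &rest.Config{
			Host: "https://192.168.72.124:8443",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/home/jenkins/minikube-integration/22081-254936/.minikube/profiles/pause-739105/client.crt",
				KeyFile:  "/home/jenkins/minikube-integration/22081-254936/.minikube/profiles/pause-739105/client.key",
				CAFile:   "/home/jenkins/minikube-integration/22081-254936/.minikube/ca.crt",
			},
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// List the kube-system pods whose Ready condition the log below polls.
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Name, p.Status.Phase)
		}
	}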
	I1209 03:19:21.135318  294888 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pt698" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:19:21.141132  294888 pod_ready.go:94] pod "coredns-66bc5c9577-pt698" is "Ready"
	I1209 03:19:21.141168  294888 pod_ready.go:86] duration metric: took 5.813398ms for pod "coredns-66bc5c9577-pt698" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:19:21.145211  294888 pod_ready.go:83] waiting for pod "etcd-pause-739105" in "kube-system" namespace to be "Ready" or be gone ...
	W1209 03:19:23.154246  294888 pod_ready.go:104] pod "etcd-pause-739105" is not "Ready", error: <nil>
	I1209 03:19:20.784333  295180 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.61.28:22: connect: no route to host
	I1209 03:19:23.785252  295180 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.61.28:22: connect: connection refused
	I1209 03:19:28.033170  295238 start.go:364] duration metric: took 16.807713519s to acquireMachinesLock for "cert-expiration-699833"
	I1209 03:19:28.033218  295238 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:19:28.033224  295238 fix.go:54] fixHost starting: 
	I1209 03:19:28.035810  295238 fix.go:112] recreateIfNeeded on cert-expiration-699833: state=Running err=<nil>
	W1209 03:19:28.035854  295238 fix.go:138] unexpected machine state, will restart: <nil>
	W1209 03:19:23.692434  291970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 03:19:23.692468  291970 logs.go:123] Gathering logs for kube-proxy [7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18] ...
	I1209 03:19:23.692485  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18"
	I1209 03:19:23.736073  291970 logs.go:123] Gathering logs for kube-controller-manager [6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50] ...
	I1209 03:19:23.736116  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50"
	I1209 03:19:23.791444  291970 logs.go:123] Gathering logs for storage-provisioner [3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9] ...
	I1209 03:19:23.791491  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9"
	I1209 03:19:23.835308  291970 logs.go:123] Gathering logs for kubelet ...
	I1209 03:19:23.835348  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:19:23.955906  291970 logs.go:123] Gathering logs for kube-apiserver [9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb] ...
	I1209 03:19:23.955948  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb"
	I1209 03:19:24.008767  291970 logs.go:123] Gathering logs for etcd [ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38] ...
	I1209 03:19:24.008820  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38"
	I1209 03:19:24.063094  291970 logs.go:123] Gathering logs for coredns [02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb] ...
	I1209 03:19:24.063133  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb"
	I1209 03:19:24.113254  291970 logs.go:123] Gathering logs for kube-scheduler [252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8] ...
	I1209 03:19:24.113306  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8"
	I1209 03:19:24.221772  291970 logs.go:123] Gathering logs for kube-scheduler [3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81] ...
	I1209 03:19:24.221841  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81"
	I1209 03:19:24.275066  291970 logs.go:123] Gathering logs for CRI-O ...
	I1209 03:19:24.275107  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 03:19:24.604703  291970 logs.go:123] Gathering logs for container status ...
	I1209 03:19:24.604744  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:19:27.159010  291970 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I1209 03:19:27.159626  291970 api_server.go:269] stopped: https://192.168.39.194:8443/healthz: Get "https://192.168.39.194:8443/healthz": dial tcp 192.168.39.194:8443: connect: connection refused
	I1209 03:19:27.159688  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 03:19:27.159752  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 03:19:27.207363  291970 cri.go:89] found id: "9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb"
	I1209 03:19:27.207390  291970 cri.go:89] found id: ""
	I1209 03:19:27.207401  291970 logs.go:282] 1 containers: [9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb]
	I1209 03:19:27.207474  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:27.212361  291970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 03:19:27.212438  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 03:19:27.256254  291970 cri.go:89] found id: "ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38"
	I1209 03:19:27.256284  291970 cri.go:89] found id: ""
	I1209 03:19:27.256298  291970 logs.go:282] 1 containers: [ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38]
	I1209 03:19:27.256372  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:27.262300  291970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 03:19:27.262412  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 03:19:27.313414  291970 cri.go:89] found id: "02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb"
	I1209 03:19:27.313451  291970 cri.go:89] found id: ""
	I1209 03:19:27.313462  291970 logs.go:282] 1 containers: [02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb]
	I1209 03:19:27.313539  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:27.326377  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 03:19:27.326479  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 03:19:27.375400  291970 cri.go:89] found id: "252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8"
	I1209 03:19:27.375425  291970 cri.go:89] found id: "3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81"
	I1209 03:19:27.375429  291970 cri.go:89] found id: ""
	I1209 03:19:27.375436  291970 logs.go:282] 2 containers: [252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8 3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81]
	I1209 03:19:27.375516  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:27.380383  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:27.385022  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 03:19:27.385117  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 03:19:27.428243  291970 cri.go:89] found id: "7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18"
	I1209 03:19:27.428276  291970 cri.go:89] found id: ""
	I1209 03:19:27.428295  291970 logs.go:282] 1 containers: [7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18]
	I1209 03:19:27.428374  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:27.434721  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 03:19:27.434821  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 03:19:27.485802  291970 cri.go:89] found id: "6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50"
	I1209 03:19:27.485849  291970 cri.go:89] found id: ""
	I1209 03:19:27.485866  291970 logs.go:282] 1 containers: [6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50]
	I1209 03:19:27.485947  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:27.491916  291970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 03:19:27.492019  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 03:19:27.539186  291970 cri.go:89] found id: ""
	I1209 03:19:27.539231  291970 logs.go:282] 0 containers: []
	W1209 03:19:27.539242  291970 logs.go:284] No container was found matching "kindnet"
	I1209 03:19:27.539248  291970 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 03:19:27.539315  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 03:19:27.589996  291970 cri.go:89] found id: "3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9"
	I1209 03:19:27.590027  291970 cri.go:89] found id: ""
	I1209 03:19:27.590039  291970 logs.go:282] 1 containers: [3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9]
	I1209 03:19:27.590113  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:27.595183  291970 logs.go:123] Gathering logs for etcd [ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38] ...
	I1209 03:19:27.595218  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38"
	I1209 03:19:27.646336  291970 logs.go:123] Gathering logs for coredns [02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb] ...
	I1209 03:19:27.646372  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb"
	I1209 03:19:27.694252  291970 logs.go:123] Gathering logs for kube-scheduler [252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8] ...
	I1209 03:19:27.694298  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8"
	I1209 03:19:27.779189  291970 logs.go:123] Gathering logs for kube-proxy [7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18] ...
	I1209 03:19:27.779235  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18"
	I1209 03:19:27.825991  291970 logs.go:123] Gathering logs for storage-provisioner [3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9] ...
	I1209 03:19:27.826027  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9"
	I1209 03:19:27.882236  291970 logs.go:123] Gathering logs for CRI-O ...
	I1209 03:19:27.882269  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 03:19:28.278459  291970 logs.go:123] Gathering logs for container status ...
	I1209 03:19:28.278520  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:19:28.345959  291970 logs.go:123] Gathering logs for kubelet ...
	I1209 03:19:28.346005  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:19:28.469799  291970 logs.go:123] Gathering logs for dmesg ...
	I1209 03:19:28.469852  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:19:28.490072  291970 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:19:28.490117  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 03:19:28.580525  291970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 03:19:28.580560  291970 logs.go:123] Gathering logs for kube-scheduler [3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81] ...
	I1209 03:19:28.580578  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81"
	I1209 03:19:28.648093  291970 logs.go:123] Gathering logs for kube-controller-manager [6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50] ...
	I1209 03:19:28.648157  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50"
	W1209 03:19:25.650979  294888 pod_ready.go:104] pod "etcd-pause-739105" is not "Ready", error: <nil>
	W1209 03:19:27.652750  294888 pod_ready.go:104] pod "etcd-pause-739105" is not "Ready", error: <nil>
	I1209 03:19:26.891420  295180 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 03:19:26.895080  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:26.895735  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:19:23 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:26.895760  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:26.896063  295180 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/stopped-upgrade-644254/config.json ...
	I1209 03:19:26.896323  295180 machine.go:94] provisionDockerMachine start ...
	I1209 03:19:26.899066  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:26.899546  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:19:23 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:26.899574  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:26.899810  295180 main.go:143] libmachine: Using SSH client type: native
	I1209 03:19:26.900098  295180 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I1209 03:19:26.900111  295180 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 03:19:27.005259  295180 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 03:19:27.005290  295180 buildroot.go:166] provisioning hostname "stopped-upgrade-644254"
	I1209 03:19:27.008437  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.008946  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:19:23 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:27.008991  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.009188  295180 main.go:143] libmachine: Using SSH client type: native
	I1209 03:19:27.009459  295180 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I1209 03:19:27.009476  295180 main.go:143] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-644254 && echo "stopped-upgrade-644254" | sudo tee /etc/hostname
	I1209 03:19:27.131588  295180 main.go:143] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-644254
	
	I1209 03:19:27.134949  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.135364  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:19:23 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:27.135398  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.135696  295180 main.go:143] libmachine: Using SSH client type: native
	I1209 03:19:27.136043  295180 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I1209 03:19:27.136073  295180 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-644254' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-644254/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-644254' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 03:19:27.262416  295180 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 03:19:27.262448  295180 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22081-254936/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-254936/.minikube}
	I1209 03:19:27.262496  295180 buildroot.go:174] setting up certificates
	I1209 03:19:27.262509  295180 provision.go:84] configureAuth start
	I1209 03:19:27.266015  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.266594  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:19:23 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:27.266630  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.269282  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.269684  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:19:23 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:27.269710  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.269919  295180 provision.go:143] copyHostCerts
	I1209 03:19:27.270005  295180 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-254936/.minikube/ca.pem, removing ...
	I1209 03:19:27.270020  295180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-254936/.minikube/ca.pem
	I1209 03:19:27.270098  295180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-254936/.minikube/ca.pem (1078 bytes)
	I1209 03:19:27.270209  295180 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-254936/.minikube/cert.pem, removing ...
	I1209 03:19:27.270221  295180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-254936/.minikube/cert.pem
	I1209 03:19:27.270251  295180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-254936/.minikube/cert.pem (1123 bytes)
	I1209 03:19:27.270313  295180 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-254936/.minikube/key.pem, removing ...
	I1209 03:19:27.270323  295180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-254936/.minikube/key.pem
	I1209 03:19:27.270346  295180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-254936/.minikube/key.pem (1679 bytes)
	I1209 03:19:27.270391  295180 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-254936/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-644254 san=[127.0.0.1 192.168.61.28 localhost minikube stopped-upgrade-644254]
	I1209 03:19:27.316292  295180 provision.go:177] copyRemoteCerts
	I1209 03:19:27.316389  295180 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 03:19:27.320013  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.320543  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:19:23 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:27.320570  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.320774  295180 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/stopped-upgrade-644254/id_rsa Username:docker}
	I1209 03:19:27.404703  295180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 03:19:27.436056  295180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1209 03:19:27.468583  295180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 03:19:27.503130  295180 provision.go:87] duration metric: took 240.60415ms to configureAuth
	I1209 03:19:27.503164  295180 buildroot.go:189] setting minikube options for container-runtime
	I1209 03:19:27.503418  295180 config.go:182] Loaded profile config "stopped-upgrade-644254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1209 03:19:27.506694  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.507111  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:19:23 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:27.507146  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.507363  295180 main.go:143] libmachine: Using SSH client type: native
	I1209 03:19:27.507616  295180 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I1209 03:19:27.507631  295180 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 03:19:27.767173  295180 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 03:19:27.767204  295180 machine.go:97] duration metric: took 870.864436ms to provisionDockerMachine
	I1209 03:19:27.767224  295180 start.go:293] postStartSetup for "stopped-upgrade-644254" (driver="kvm2")
	I1209 03:19:27.767236  295180 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 03:19:27.767312  295180 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 03:19:27.770491  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.770908  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:19:23 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:27.770948  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.771131  295180 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/stopped-upgrade-644254/id_rsa Username:docker}
	I1209 03:19:27.864490  295180 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 03:19:27.870280  295180 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 03:19:27.870321  295180 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-254936/.minikube/addons for local assets ...
	I1209 03:19:27.870409  295180 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-254936/.minikube/files for local assets ...
	I1209 03:19:27.870488  295180 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-254936/.minikube/files/etc/ssl/certs/2588542.pem -> 2588542.pem in /etc/ssl/certs
	I1209 03:19:27.870612  295180 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 03:19:27.882216  295180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/files/etc/ssl/certs/2588542.pem --> /etc/ssl/certs/2588542.pem (1708 bytes)
	I1209 03:19:27.919766  295180 start.go:296] duration metric: took 152.522264ms for postStartSetup
	I1209 03:19:27.919839  295180 fix.go:56] duration metric: took 17.841519475s for fixHost
	I1209 03:19:27.923466  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.923967  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:19:23 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:27.924018  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:27.924320  295180 main.go:143] libmachine: Using SSH client type: native
	I1209 03:19:27.924683  295180 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.61.28 22 <nil> <nil>}
	I1209 03:19:27.924705  295180 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1209 03:19:28.032970  295180 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765250367.994660283
	
	I1209 03:19:28.033002  295180 fix.go:216] guest clock: 1765250367.994660283
	I1209 03:19:28.033013  295180 fix.go:229] Guest: 2025-12-09 03:19:27.994660283 +0000 UTC Remote: 2025-12-09 03:19:27.919846532 +0000 UTC m=+17.963406645 (delta=74.813751ms)
	I1209 03:19:28.033038  295180 fix.go:200] guest clock delta is within tolerance: 74.813751ms
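For reference, the guest-clock check logged by fix.go above boils down to parsing the guest's `date +%s.%N` output and comparing it with the host-side timestamp; with the values from this log the difference comes out to 74.813751ms. A minimal sketch follows; the one-second tolerance constant is an assumption for illustration only, not the value fix.go actually uses.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, copied from the log above.
	guestOut := "1765250367.994660283"
	parts := strings.SplitN(guestOut, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	// Host-side reference time, also taken from the log above.
	remote := time.Date(2025, 12, 9, 3, 19, 27, 919846532, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}

	// Assumed threshold for illustration; the real tolerance lives in fix.go.
	const tolerance = 1 * time.Second
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
}
```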
	I1209 03:19:28.033045  295180 start.go:83] releasing machines lock for "stopped-upgrade-644254", held for 17.954752752s
	I1209 03:19:28.036810  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:28.037332  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:19:23 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:28.037358  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:28.037987  295180 ssh_runner.go:195] Run: cat /version.json
	I1209 03:19:28.038076  295180 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 03:19:28.042023  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:28.042120  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:28.042528  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:19:23 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:28.042560  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:28.042617  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:19:23 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:28.042649  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:28.043104  295180 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/stopped-upgrade-644254/id_rsa Username:docker}
	I1209 03:19:28.043343  295180 sshutil.go:53] new ssh client: &{IP:192.168.61.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/stopped-upgrade-644254/id_rsa Username:docker}
	W1209 03:19:28.145480  295180 out.go:285] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.35.0 -> Actual minikube version: v1.37.0
	I1209 03:19:28.145576  295180 ssh_runner.go:195] Run: systemctl --version
	I1209 03:19:28.154034  295180 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 03:19:28.312741  295180 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 03:19:28.321956  295180 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 03:19:28.322042  295180 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 03:19:28.347140  295180 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 03:19:28.347187  295180 start.go:496] detecting cgroup driver to use...
	I1209 03:19:28.347279  295180 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 03:19:28.371115  295180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 03:19:28.390135  295180 docker.go:218] disabling cri-docker service (if available) ...
	I1209 03:19:28.390229  295180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 03:19:28.406773  295180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 03:19:28.422353  295180 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 03:19:28.566837  295180 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 03:19:28.757321  295180 docker.go:234] disabling docker service ...
	I1209 03:19:28.757430  295180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 03:19:28.776126  295180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 03:19:28.792061  295180 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 03:19:28.930819  295180 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 03:19:29.088606  295180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 03:19:29.108952  295180 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 03:19:29.137429  295180 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 03:19:29.137516  295180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 03:19:29.155693  295180 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 03:19:29.155795  295180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 03:19:29.169749  295180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 03:19:29.184644  295180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 03:19:29.199435  295180 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 03:19:29.213973  295180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 03:19:29.226356  295180 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 03:19:29.246914  295180 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 03:19:29.259864  295180 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 03:19:29.271052  295180 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 03:19:29.271120  295180 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 03:19:29.286170  295180 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
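The three commands above show a fallback path: the bridge-nf-call-iptables sysctl is unreadable until the br_netfilter module is loaded, so the module is loaded and IPv4 forwarding is enabled afterwards. A rough Go sketch of that ordering, using os/exec rather than minikube's own crio.go helpers, might look like this:

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and returns its combined output in the error, mirroring
// the "Process exited with status 255" style messages seen in the log.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w (output: %s)", name, args, err, out)
	}
	return nil
}

func main() {
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		// Matches the "couldn't verify netfilter ... which might be okay" path above.
		fmt.Println("sysctl check failed, loading br_netfilter:", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			fmt.Println("modprobe failed:", err)
		}
	}
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
	}
}
```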
	I1209 03:19:29.297596  295180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:19:29.421906  295180 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 03:19:29.524586  295180 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 03:19:29.524677  295180 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 03:19:29.529841  295180 start.go:564] Will wait 60s for crictl version
	I1209 03:19:29.529933  295180 ssh_runner.go:195] Run: which crictl
	I1209 03:19:29.534354  295180 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 03:19:29.578780  295180 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 03:19:29.578918  295180 ssh_runner.go:195] Run: crio --version
	I1209 03:19:29.615258  295180 ssh_runner.go:195] Run: crio --version
	I1209 03:19:29.651515  295180 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1209 03:19:29.655817  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:29.656239  295180 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f1:f3:a2", ip: ""} in network mk-stopped-upgrade-644254: {Iface:virbr3 ExpiryTime:2025-12-09 04:19:23 +0000 UTC Type:0 Mac:52:54:00:f1:f3:a2 Iaid: IPaddr:192.168.61.28 Prefix:24 Hostname:stopped-upgrade-644254 Clientid:01:52:54:00:f1:f3:a2}
	I1209 03:19:29.656265  295180 main.go:143] libmachine: domain stopped-upgrade-644254 has defined IP address 192.168.61.28 and MAC address 52:54:00:f1:f3:a2 in network mk-stopped-upgrade-644254
	I1209 03:19:29.656471  295180 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1209 03:19:29.661216  295180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 03:19:29.675289  295180 kubeadm.go:884] updating cluster {Name:stopped-upgrade-644254 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-644254 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.28 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 03:19:29.675432  295180 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1209 03:19:29.675491  295180 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 03:19:29.720138  295180 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1209 03:19:29.720215  295180 ssh_runner.go:195] Run: which lz4
	I1209 03:19:29.724787  295180 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 03:19:29.729357  295180 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 03:19:29.729396  295180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1209 03:19:28.037516  295238 out.go:252] * Updating the running kvm2 "cert-expiration-699833" VM ...
	I1209 03:19:28.037542  295238 machine.go:94] provisionDockerMachine start ...
	I1209 03:19:28.041675  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:28.042423  295238 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e7:a7:4d", ip: ""} in network mk-cert-expiration-699833: {Iface:virbr2 ExpiryTime:2025-12-09 04:15:44 +0000 UTC Type:0 Mac:52:54:00:e7:a7:4d Iaid: IPaddr:192.168.50.113 Prefix:24 Hostname:cert-expiration-699833 Clientid:01:52:54:00:e7:a7:4d}
	I1209 03:19:28.042464  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined IP address 192.168.50.113 and MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:28.042999  295238 main.go:143] libmachine: Using SSH client type: native
	I1209 03:19:28.043358  295238 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.50.113 22 <nil> <nil>}
	I1209 03:19:28.043369  295238 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 03:19:28.174516  295238 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-699833
	
	I1209 03:19:28.174555  295238 buildroot.go:166] provisioning hostname "cert-expiration-699833"
	I1209 03:19:28.178361  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:28.178867  295238 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e7:a7:4d", ip: ""} in network mk-cert-expiration-699833: {Iface:virbr2 ExpiryTime:2025-12-09 04:15:44 +0000 UTC Type:0 Mac:52:54:00:e7:a7:4d Iaid: IPaddr:192.168.50.113 Prefix:24 Hostname:cert-expiration-699833 Clientid:01:52:54:00:e7:a7:4d}
	I1209 03:19:28.178900  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined IP address 192.168.50.113 and MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:28.179078  295238 main.go:143] libmachine: Using SSH client type: native
	I1209 03:19:28.179360  295238 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.50.113 22 <nil> <nil>}
	I1209 03:19:28.179368  295238 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-699833 && echo "cert-expiration-699833" | sudo tee /etc/hostname
	I1209 03:19:28.318360  295238 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-699833
	
	I1209 03:19:28.322533  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:28.323135  295238 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e7:a7:4d", ip: ""} in network mk-cert-expiration-699833: {Iface:virbr2 ExpiryTime:2025-12-09 04:15:44 +0000 UTC Type:0 Mac:52:54:00:e7:a7:4d Iaid: IPaddr:192.168.50.113 Prefix:24 Hostname:cert-expiration-699833 Clientid:01:52:54:00:e7:a7:4d}
	I1209 03:19:28.323185  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined IP address 192.168.50.113 and MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:28.323457  295238 main.go:143] libmachine: Using SSH client type: native
	I1209 03:19:28.323716  295238 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.50.113 22 <nil> <nil>}
	I1209 03:19:28.323728  295238 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-699833' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-699833/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-699833' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 03:19:28.448086  295238 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 03:19:28.448121  295238 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22081-254936/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-254936/.minikube}
	I1209 03:19:28.448149  295238 buildroot.go:174] setting up certificates
	I1209 03:19:28.448163  295238 provision.go:84] configureAuth start
	I1209 03:19:28.452071  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:28.452610  295238 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e7:a7:4d", ip: ""} in network mk-cert-expiration-699833: {Iface:virbr2 ExpiryTime:2025-12-09 04:15:44 +0000 UTC Type:0 Mac:52:54:00:e7:a7:4d Iaid: IPaddr:192.168.50.113 Prefix:24 Hostname:cert-expiration-699833 Clientid:01:52:54:00:e7:a7:4d}
	I1209 03:19:28.452631  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined IP address 192.168.50.113 and MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:28.455624  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:28.456017  295238 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e7:a7:4d", ip: ""} in network mk-cert-expiration-699833: {Iface:virbr2 ExpiryTime:2025-12-09 04:15:44 +0000 UTC Type:0 Mac:52:54:00:e7:a7:4d Iaid: IPaddr:192.168.50.113 Prefix:24 Hostname:cert-expiration-699833 Clientid:01:52:54:00:e7:a7:4d}
	I1209 03:19:28.456048  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined IP address 192.168.50.113 and MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:28.456197  295238 provision.go:143] copyHostCerts
	I1209 03:19:28.456281  295238 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-254936/.minikube/cert.pem, removing ...
	I1209 03:19:28.456290  295238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-254936/.minikube/cert.pem
	I1209 03:19:28.456370  295238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-254936/.minikube/cert.pem (1123 bytes)
	I1209 03:19:28.456474  295238 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-254936/.minikube/key.pem, removing ...
	I1209 03:19:28.456478  295238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-254936/.minikube/key.pem
	I1209 03:19:28.456499  295238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-254936/.minikube/key.pem (1679 bytes)
	I1209 03:19:28.456556  295238 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-254936/.minikube/ca.pem, removing ...
	I1209 03:19:28.456559  295238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-254936/.minikube/ca.pem
	I1209 03:19:28.456575  295238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-254936/.minikube/ca.pem (1078 bytes)
	I1209 03:19:28.456623  295238 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-254936/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-699833 san=[127.0.0.1 192.168.50.113 cert-expiration-699833 localhost minikube]
	I1209 03:19:28.720780  295238 provision.go:177] copyRemoteCerts
	I1209 03:19:28.720840  295238 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 03:19:28.724308  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:28.724792  295238 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e7:a7:4d", ip: ""} in network mk-cert-expiration-699833: {Iface:virbr2 ExpiryTime:2025-12-09 04:15:44 +0000 UTC Type:0 Mac:52:54:00:e7:a7:4d Iaid: IPaddr:192.168.50.113 Prefix:24 Hostname:cert-expiration-699833 Clientid:01:52:54:00:e7:a7:4d}
	I1209 03:19:28.724811  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined IP address 192.168.50.113 and MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:28.724967  295238 sshutil.go:53] new ssh client: &{IP:192.168.50.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/cert-expiration-699833/id_rsa Username:docker}
	I1209 03:19:28.821073  295238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 03:19:28.858990  295238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1209 03:19:28.897536  295238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 03:19:28.940865  295238 provision.go:87] duration metric: took 492.684984ms to configureAuth
	I1209 03:19:28.940890  295238 buildroot.go:189] setting minikube options for container-runtime
	I1209 03:19:28.941086  295238 config.go:182] Loaded profile config "cert-expiration-699833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 03:19:28.944145  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:28.944606  295238 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e7:a7:4d", ip: ""} in network mk-cert-expiration-699833: {Iface:virbr2 ExpiryTime:2025-12-09 04:15:44 +0000 UTC Type:0 Mac:52:54:00:e7:a7:4d Iaid: IPaddr:192.168.50.113 Prefix:24 Hostname:cert-expiration-699833 Clientid:01:52:54:00:e7:a7:4d}
	I1209 03:19:28.944639  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined IP address 192.168.50.113 and MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:28.944821  295238 main.go:143] libmachine: Using SSH client type: native
	I1209 03:19:28.945144  295238 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.50.113 22 <nil> <nil>}
	I1209 03:19:28.945160  295238 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1209 03:19:29.653489  294888 pod_ready.go:104] pod "etcd-pause-739105" is not "Ready", error: <nil>
	W1209 03:19:32.154375  294888 pod_ready.go:104] pod "etcd-pause-739105" is not "Ready", error: <nil>
	I1209 03:19:32.654169  294888 pod_ready.go:94] pod "etcd-pause-739105" is "Ready"
	I1209 03:19:32.654205  294888 pod_ready.go:86] duration metric: took 11.508963618s for pod "etcd-pause-739105" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:19:32.658482  294888 pod_ready.go:83] waiting for pod "kube-apiserver-pause-739105" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:19:32.665456  294888 pod_ready.go:94] pod "kube-apiserver-pause-739105" is "Ready"
	I1209 03:19:32.665493  294888 pod_ready.go:86] duration metric: took 6.969874ms for pod "kube-apiserver-pause-739105" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:19:32.668999  294888 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-739105" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:19:32.677052  294888 pod_ready.go:94] pod "kube-controller-manager-pause-739105" is "Ready"
	I1209 03:19:32.677090  294888 pod_ready.go:86] duration metric: took 8.053977ms for pod "kube-controller-manager-pause-739105" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:19:32.679956  294888 pod_ready.go:83] waiting for pod "kube-proxy-rxfdq" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:19:32.850259  294888 pod_ready.go:94] pod "kube-proxy-rxfdq" is "Ready"
	I1209 03:19:32.850290  294888 pod_ready.go:86] duration metric: took 170.298804ms for pod "kube-proxy-rxfdq" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:19:33.050566  294888 pod_ready.go:83] waiting for pod "kube-scheduler-pause-739105" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:19:33.450718  294888 pod_ready.go:94] pod "kube-scheduler-pause-739105" is "Ready"
	I1209 03:19:33.450762  294888 pod_ready.go:86] duration metric: took 400.159535ms for pod "kube-scheduler-pause-739105" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 03:19:33.450782  294888 pod_ready.go:40] duration metric: took 12.320406455s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 03:19:33.510365  294888 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1209 03:19:33.512044  294888 out.go:179] * Done! kubectl is now configured to use "pause-739105" cluster and "default" namespace by default
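The pod_ready.go lines above poll each kube-system control-plane pod until its PodReady condition turns True, recording a per-pod wait duration. A minimal client-go sketch of that kind of readiness poll is shown below; the kubeconfig path, the pod name, and the two-minute timeout are assumptions for illustration, and this is not the test helper itself.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path; the log above is for the "pause-739105" profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-pause-739105", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to be Ready")
			return
		case <-time.After(2 * time.Second): // re-check every couple of seconds
		}
	}
}
```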
	I1209 03:19:28.694991  291970 logs.go:123] Gathering logs for kube-apiserver [9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb] ...
	I1209 03:19:28.695031  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb"
	I1209 03:19:31.243987  291970 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I1209 03:19:31.244692  291970 api_server.go:269] stopped: https://192.168.39.194:8443/healthz: Get "https://192.168.39.194:8443/healthz": dial tcp 192.168.39.194:8443: connect: connection refused
	I1209 03:19:31.244781  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 03:19:31.244875  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 03:19:31.302919  291970 cri.go:89] found id: "9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb"
	I1209 03:19:31.302950  291970 cri.go:89] found id: ""
	I1209 03:19:31.302961  291970 logs.go:282] 1 containers: [9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb]
	I1209 03:19:31.303036  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:31.308153  291970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 03:19:31.308252  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 03:19:31.369995  291970 cri.go:89] found id: "ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38"
	I1209 03:19:31.370023  291970 cri.go:89] found id: ""
	I1209 03:19:31.370034  291970 logs.go:282] 1 containers: [ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38]
	I1209 03:19:31.370110  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:31.375556  291970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 03:19:31.375650  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 03:19:31.425376  291970 cri.go:89] found id: "02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb"
	I1209 03:19:31.425410  291970 cri.go:89] found id: ""
	I1209 03:19:31.425422  291970 logs.go:282] 1 containers: [02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb]
	I1209 03:19:31.425502  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:31.431172  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 03:19:31.431367  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 03:19:31.490166  291970 cri.go:89] found id: "252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8"
	I1209 03:19:31.490195  291970 cri.go:89] found id: "3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81"
	I1209 03:19:31.490201  291970 cri.go:89] found id: ""
	I1209 03:19:31.490210  291970 logs.go:282] 2 containers: [252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8 3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81]
	I1209 03:19:31.490284  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:31.495223  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:31.499959  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 03:19:31.500043  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 03:19:31.545106  291970 cri.go:89] found id: "7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18"
	I1209 03:19:31.545130  291970 cri.go:89] found id: ""
	I1209 03:19:31.545138  291970 logs.go:282] 1 containers: [7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18]
	I1209 03:19:31.545201  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:31.550087  291970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 03:19:31.550163  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 03:19:31.592966  291970 cri.go:89] found id: "6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50"
	I1209 03:19:31.592995  291970 cri.go:89] found id: ""
	I1209 03:19:31.593004  291970 logs.go:282] 1 containers: [6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50]
	I1209 03:19:31.593064  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:31.599248  291970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 03:19:31.599329  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 03:19:31.649102  291970 cri.go:89] found id: ""
	I1209 03:19:31.649136  291970 logs.go:282] 0 containers: []
	W1209 03:19:31.649148  291970 logs.go:284] No container was found matching "kindnet"
	I1209 03:19:31.649156  291970 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 03:19:31.649230  291970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 03:19:31.700123  291970 cri.go:89] found id: "3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9"
	I1209 03:19:31.700146  291970 cri.go:89] found id: ""
	I1209 03:19:31.700154  291970 logs.go:282] 1 containers: [3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9]
	I1209 03:19:31.700211  291970 ssh_runner.go:195] Run: which crictl
	I1209 03:19:31.704852  291970 logs.go:123] Gathering logs for coredns [02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb] ...
	I1209 03:19:31.704885  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02160ce9511725e5319cf8fcbb466bf168f496a90cdba46f4addb4efa29ec1bb"
	I1209 03:19:31.746182  291970 logs.go:123] Gathering logs for kube-scheduler [3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81] ...
	I1209 03:19:31.746231  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b855f77976185ced5ed632182b0fa23ffb7e6442ef75979c062199798a51c81"
	I1209 03:19:31.804476  291970 logs.go:123] Gathering logs for kube-proxy [7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18] ...
	I1209 03:19:31.804521  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b6f0728f4843eb839254ece0137bee44f58201ea95af20fa984c24f726cdf18"
	I1209 03:19:31.861938  291970 logs.go:123] Gathering logs for CRI-O ...
	I1209 03:19:31.861979  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 03:19:32.189141  291970 logs.go:123] Gathering logs for container status ...
	I1209 03:19:32.189181  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:19:32.248192  291970 logs.go:123] Gathering logs for kube-apiserver [9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb] ...
	I1209 03:19:32.248223  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ef09c06860c83e860373da7e0d895bede0f4f8db3e1ff3c35d543b10a0e9dcb"
	I1209 03:19:32.299693  291970 logs.go:123] Gathering logs for etcd [ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38] ...
	I1209 03:19:32.299726  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae6ec6e11491f66c82506acbb8b454a85a1f58e86cb85d0a0351a7808cba2e38"
	I1209 03:19:32.354632  291970 logs.go:123] Gathering logs for kube-scheduler [252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8] ...
	I1209 03:19:32.354682  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 252044224af2b7f454917a0916fefdff2330e6ea863cde1529353e3ad7e932b8"
	I1209 03:19:32.444759  291970 logs.go:123] Gathering logs for kube-controller-manager [6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50] ...
	I1209 03:19:32.444802  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b5d193668a5ab7592ff8f7ab436996b27f5e761fb400d010c9d4aa34d526f50"
	I1209 03:19:32.491124  291970 logs.go:123] Gathering logs for storage-provisioner [3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9] ...
	I1209 03:19:32.491158  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ea67ab8e7a216bad4beb6da5cb38c2278f03d82a9ad1d8ebc2a177907f849c9"
	I1209 03:19:32.531800  291970 logs.go:123] Gathering logs for kubelet ...
	I1209 03:19:32.531851  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:19:32.651416  291970 logs.go:123] Gathering logs for dmesg ...
	I1209 03:19:32.651457  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:19:32.672034  291970 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:19:32.672081  291970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 03:19:32.777120  291970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 03:19:31.474891  295180 crio.go:462] duration metric: took 1.750127242s to copy over tarball
	I1209 03:19:31.475072  295180 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 03:19:34.304900  295180 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.829768798s)
	I1209 03:19:34.304938  295180 crio.go:469] duration metric: took 2.829997726s to extract the tarball
	I1209 03:19:34.304955  295180 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 03:19:34.357585  295180 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 03:19:34.413358  295180 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 03:19:34.413388  295180 cache_images.go:86] Images are preloaded, skipping loading
	I1209 03:19:34.413399  295180 kubeadm.go:935] updating node { 192.168.61.28 8443 v1.32.0 crio true true} ...
	I1209 03:19:34.413522  295180 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=stopped-upgrade-644254 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-644254 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 03:19:34.413607  295180 ssh_runner.go:195] Run: crio config
	I1209 03:19:34.477654  295180 cni.go:84] Creating CNI manager for ""
	I1209 03:19:34.477694  295180 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 03:19:34.477711  295180 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 03:19:34.477742  295180 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.28 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-644254 NodeName:stopped-upgrade-644254 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 03:19:34.480279  295180 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "stopped-upgrade-644254"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.28"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.28"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 03:19:34.480386  295180 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1209 03:19:34.495072  295180 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 03:19:34.495149  295180 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 03:19:34.511203  295180 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1209 03:19:34.533253  295180 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 03:19:34.556623  295180 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1209 03:19:34.581081  295180 ssh_runner.go:195] Run: grep 192.168.61.28	control-plane.minikube.internal$ /etc/hosts
	I1209 03:19:34.587090  295180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.28	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 03:19:34.602041  295180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:19:34.738006  295180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 03:19:34.757639  295180 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/stopped-upgrade-644254 for IP: 192.168.61.28
	I1209 03:19:34.757669  295180 certs.go:195] generating shared ca certs ...
	I1209 03:19:34.757692  295180 certs.go:227] acquiring lock for ca certs: {Name:mk538e8c05758246ce904354c7e7ace78887d181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:19:34.757906  295180 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-254936/.minikube/ca.key
	I1209 03:19:34.757978  295180 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-254936/.minikube/proxy-client-ca.key
	I1209 03:19:34.757992  295180 certs.go:257] generating profile certs ...
	I1209 03:19:34.758121  295180 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/stopped-upgrade-644254/client.key
	I1209 03:19:34.758206  295180 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/stopped-upgrade-644254/apiserver.key.8b0cdb6f
	I1209 03:19:34.758262  295180 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/stopped-upgrade-644254/proxy-client.key
	I1209 03:19:34.758409  295180 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/258854.pem (1338 bytes)
	W1209 03:19:34.758453  295180 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-254936/.minikube/certs/258854_empty.pem, impossibly tiny 0 bytes
	I1209 03:19:34.758466  295180 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 03:19:34.758502  295180 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem (1078 bytes)
	I1209 03:19:34.758540  295180 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/cert.pem (1123 bytes)
	I1209 03:19:34.758571  295180 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/key.pem (1679 bytes)
	I1209 03:19:34.758632  295180 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-254936/.minikube/files/etc/ssl/certs/2588542.pem (1708 bytes)
	I1209 03:19:34.759489  295180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 03:19:34.818163  295180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 03:19:34.848287  295180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 03:19:34.880025  295180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 03:19:34.909314  295180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/stopped-upgrade-644254/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 03:19:34.936232  295180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/stopped-upgrade-644254/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 03:19:34.964896  295180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/stopped-upgrade-644254/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 03:19:34.997283  295180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/stopped-upgrade-644254/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 03:19:35.631007  295238 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 03:19:35.631032  295238 machine.go:97] duration metric: took 7.593482169s to provisionDockerMachine
	I1209 03:19:35.631046  295238 start.go:293] postStartSetup for "cert-expiration-699833" (driver="kvm2")
	I1209 03:19:35.631077  295238 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 03:19:35.631223  295238 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 03:19:35.634875  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:35.635257  295238 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e7:a7:4d", ip: ""} in network mk-cert-expiration-699833: {Iface:virbr2 ExpiryTime:2025-12-09 04:15:44 +0000 UTC Type:0 Mac:52:54:00:e7:a7:4d Iaid: IPaddr:192.168.50.113 Prefix:24 Hostname:cert-expiration-699833 Clientid:01:52:54:00:e7:a7:4d}
	I1209 03:19:35.635275  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined IP address 192.168.50.113 and MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:35.635456  295238 sshutil.go:53] new ssh client: &{IP:192.168.50.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/cert-expiration-699833/id_rsa Username:docker}
	I1209 03:19:35.731684  295238 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 03:19:35.737579  295238 info.go:137] Remote host: Buildroot 2025.02
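The info.go line above reports the guest OS ("Buildroot 2025.02") detected from the `cat /etc/os-release` output. A minimal sketch of that kind of parsing, run against a local file here rather than over SSH, might look like this:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

// Parse /etc/os-release key=value pairs and report the distro name/version,
// roughly what the "Remote host: ..." log line is based on.
func main() {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	values := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), "="); ok {
			values[k] = strings.Trim(v, `"`)
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Remote host: %s %s\n", values["NAME"], values["VERSION_ID"])
}
```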
	I1209 03:19:35.737601  295238 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-254936/.minikube/addons for local assets ...
	I1209 03:19:35.737667  295238 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-254936/.minikube/files for local assets ...
	I1209 03:19:35.737754  295238 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-254936/.minikube/files/etc/ssl/certs/2588542.pem -> 2588542.pem in /etc/ssl/certs
	I1209 03:19:35.737900  295238 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 03:19:35.755414  295238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/files/etc/ssl/certs/2588542.pem --> /etc/ssl/certs/2588542.pem (1708 bytes)
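The filesync lines above scan the local .minikube/files tree and map each asset to its destination inside the guest (e.g. .../files/etc/ssl/certs/2588542.pem -> /etc/ssl/certs/2588542.pem). A rough sketch of that mapping, with a placeholder root path and no actual copying:

```go
package main

import (
	"fmt"
	"io/fs"
	"log"
	"path/filepath"
	"strings"
)

// Walk a local "files" root and compute, for each regular file, the path it
// should land at inside the guest. Sketch only; minikube's real logic lives
// in filesync.go and also handles permissions and checksums.
func main() {
	root := "/home/jenkins/.minikube/files" // hypothetical local assets root
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if d.IsDir() {
			return nil
		}
		// <root>/etc/ssl/certs/foo.pem maps to /etc/ssl/certs/foo.pem on the guest.
		dest := strings.TrimPrefix(path, root)
		fmt.Printf("local asset: %s -> %s\n", path, dest)
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```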
	I1209 03:19:35.799919  295238 start.go:296] duration metric: took 168.8331ms for postStartSetup
	I1209 03:19:35.799974  295238 fix.go:56] duration metric: took 7.766748928s for fixHost
	I1209 03:19:35.804663  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:35.805399  295238 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e7:a7:4d", ip: ""} in network mk-cert-expiration-699833: {Iface:virbr2 ExpiryTime:2025-12-09 04:15:44 +0000 UTC Type:0 Mac:52:54:00:e7:a7:4d Iaid: IPaddr:192.168.50.113 Prefix:24 Hostname:cert-expiration-699833 Clientid:01:52:54:00:e7:a7:4d}
	I1209 03:19:35.805429  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined IP address 192.168.50.113 and MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:35.805761  295238 main.go:143] libmachine: Using SSH client type: native
	I1209 03:19:35.806137  295238 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil>  [] 0s} 192.168.50.113 22 <nil> <nil>}
	I1209 03:19:35.806147  295238 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1209 03:19:35.928692  295238 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765250375.920566596
	
	I1209 03:19:35.928710  295238 fix.go:216] guest clock: 1765250375.920566596
	I1209 03:19:35.928720  295238 fix.go:229] Guest: 2025-12-09 03:19:35.920566596 +0000 UTC Remote: 2025-12-09 03:19:35.799978849 +0000 UTC m=+24.706647374 (delta=120.587747ms)
	I1209 03:19:35.928741  295238 fix.go:200] guest clock delta is within tolerance: 120.587747ms
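The fix.go lines above run `date +%s.%N` on the guest, parse the result, and accept the clock skew if the delta against the host time is within tolerance (here 120.587747ms). A minimal, self-contained sketch of that comparison is below; the one-second tolerance is an assumed value for illustration, not necessarily minikube's.

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` (e.g. "1765250375.920566596")
// into a time.Time. The fractional part from %N is always nine digits, so it
// can be read directly as nanoseconds.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1765250375.920566596\n")
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := host.Sub(guest)
	const tolerance = time.Second // assumed threshold for illustration
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync the guest clock\n", delta)
	}
}
```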
	I1209 03:19:35.928747  295238 start.go:83] releasing machines lock for "cert-expiration-699833", held for 7.895553489s
	I1209 03:19:35.931752  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:35.932360  295238 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e7:a7:4d", ip: ""} in network mk-cert-expiration-699833: {Iface:virbr2 ExpiryTime:2025-12-09 04:15:44 +0000 UTC Type:0 Mac:52:54:00:e7:a7:4d Iaid: IPaddr:192.168.50.113 Prefix:24 Hostname:cert-expiration-699833 Clientid:01:52:54:00:e7:a7:4d}
	I1209 03:19:35.932387  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined IP address 192.168.50.113 and MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:35.933127  295238 ssh_runner.go:195] Run: cat /version.json
	I1209 03:19:35.933185  295238 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 03:19:35.936302  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:35.936397  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:35.936732  295238 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e7:a7:4d", ip: ""} in network mk-cert-expiration-699833: {Iface:virbr2 ExpiryTime:2025-12-09 04:15:44 +0000 UTC Type:0 Mac:52:54:00:e7:a7:4d Iaid: IPaddr:192.168.50.113 Prefix:24 Hostname:cert-expiration-699833 Clientid:01:52:54:00:e7:a7:4d}
	I1209 03:19:35.936749  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined IP address 192.168.50.113 and MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:35.936886  295238 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e7:a7:4d", ip: ""} in network mk-cert-expiration-699833: {Iface:virbr2 ExpiryTime:2025-12-09 04:15:44 +0000 UTC Type:0 Mac:52:54:00:e7:a7:4d Iaid: IPaddr:192.168.50.113 Prefix:24 Hostname:cert-expiration-699833 Clientid:01:52:54:00:e7:a7:4d}
	I1209 03:19:35.936917  295238 sshutil.go:53] new ssh client: &{IP:192.168.50.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/cert-expiration-699833/id_rsa Username:docker}
	I1209 03:19:35.936924  295238 main.go:143] libmachine: domain cert-expiration-699833 has defined IP address 192.168.50.113 and MAC address 52:54:00:e7:a7:4d in network mk-cert-expiration-699833
	I1209 03:19:35.937148  295238 sshutil.go:53] new ssh client: &{IP:192.168.50.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/cert-expiration-699833/id_rsa Username:docker}
	I1209 03:19:36.056935  295238 ssh_runner.go:195] Run: systemctl --version
	I1209 03:19:36.066648  295238 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
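The CRI-O section below is dominated by repeated RuntimeService/Version and RuntimeService/ListContainers calls arriving from the kubelet every few tens of milliseconds, each answered with the full container list because the request carries an empty filter. As a hedged sketch of issuing the same CRI calls directly, assuming the default CRI-O socket path and the k8s.io/cri-api Go bindings:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// Dial the CRI-O socket and mirror the Version and ListContainers requests
// seen in the debug log. Sketch only; the socket path is CRI-O's default and
// may differ on other guests.
func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	ver, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("runtime %s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// An empty filter corresponds to the "No filters were applied, returning
	// full container list" lines in the log.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s\t%s\t%s\n", c.Id, c.Metadata.Name, c.State)
	}
}
```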
	
	
	==> CRI-O <==
	Dec 09 03:19:38 pause-739105 crio[2827]: time="2025-12-09 03:19:38.131043457Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765250378130952529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e4fe3fc-6bb9-49bc-b36b-75cdcab93c34 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 03:19:38 pause-739105 crio[2827]: time="2025-12-09 03:19:38.132436992Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c2d6a83f-6e80-4623-947a-0056275a2dd7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:19:38 pause-739105 crio[2827]: time="2025-12-09 03:19:38.132516845Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c2d6a83f-6e80-4623-947a-0056275a2dd7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:19:38 pause-739105 crio[2827]: time="2025-12-09 03:19:38.133100777Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c8e068718377187d3b4b28e5adbf9015357aa760172aa9183c59e09e14d2968b,PodSandboxId:482099d32a99b56188357804fae68331d845e1c8a808e83ddd7cabdb87e9dfbe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765250359017539352,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rxfdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6d4576-8e92-4abd-8193-d8b9ddd7266d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc752c9728b3b332d395aa59842764f04d4caa40df28f144b683004557327f2,PodSandboxId:c55a751d02827c0d0f3788a23c6c21ce67c22da8309334cc84c158afcb60b96e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765250355273083336,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e02b020db785bb074900f6975d3f97,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":1025
7,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4a0bac8a98646d5c29d58b036502fcf131af10f10fa52995741cab94b0da2a1,PodSandboxId:0f8213b341c4e30564aa9565d039e4258055d12695daac0b882de811e7131527,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765250355249459598,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0bbece84dc5cd4f438fa4e2374c1ea,},Annotations:map[string]string{io.kuber
netes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbbae70353466efdc8393fadfdacf6a86580e99a587163778a008b33062df1d6,PodSandboxId:0ac8be03669c39343dfef2cd9ebf9e0f8ac28cfebe2b68a710e6c8cba13e8c3b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765250355211668801,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739105,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: e199ec6a0c235ca384a8aa2beb74b691,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb0e6d605cf6597e05179b758b729785cd27679b39c9bd63286c60feb85c8bf,PodSandboxId:68ee0f3f8016a7d9d60641a347822503a1249f2f0e4c89ba97376f34db491f1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765250355196871886,Labels:map[string]string{io.kubernetes.container.name: etcd,i
o.kubernetes.pod.name: etcd-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bae2024872c09e08f41c6347be77829b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6181d9c9b69dd99c98ab2b6e3a5cafeeeaf2e38e62616b8475b3d33316dd1944,PodSandboxId:c4989bf8fc2b1d4911c4a37cb3968b828e9b97ec217f1a3354e1973ace713fba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17652
50338139497040,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pt698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79e9e39-615a-4e96-afd4-3b7e856cc3f4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ae140d28849706ed992a697e3d674291e73d88d78d98be55c0ec624c7bfa79,PodSandboxId:482099d32a99b56188357804fae68331d845e1c8a808e8
3ddd7cabdb87e9dfbe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765250337055678970,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rxfdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6d4576-8e92-4abd-8193-d8b9ddd7266d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de7292ca8714180c12d0930642833f614c66bde369219707289a884965f99ae3,PodSandboxId:0f8213b341c4e30564aa9565d039e4258055d12695daac0b882de811e7131527,Metadata:&Conta
inerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765250336964797509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0bbece84dc5cd4f438fa4e2374c1ea,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4e129d74e7c8c6290797c64aa8bec8b1aff4105e7c26698b9f7ab
a0e9db897f,PodSandboxId:0ac8be03669c39343dfef2cd9ebf9e0f8ac28cfebe2b68a710e6c8cba13e8c3b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765250336959936499,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e199ec6a0c235ca384a8aa2beb74b691,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:4f488569925dc632f4dd1ca1b39df71b05e838a45e325da6c9ccc93996ce130c,PodSandboxId:68ee0f3f8016a7d9d60641a347822503a1249f2f0e4c89ba97376f34db491f1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765250336915222577,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bae2024872c09e08f41c6347be77829b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:026aaeb74336572baaaf4524f3103e9e18100483e64abfe1932d87eab57a79dc,PodSandboxId:c55a751d02827c0d0f3788a23c6c21ce67c22da8309334cc84c158afcb60b96e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765250336877876907,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e02b020db785bb074900f6975d3f97,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257
,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14444bc2d3af3e65b99e16ca7ed91add6e58b282592f24742c4b426c0748bf39,PodSandboxId:ebfcc6a20b38cdbb939e5982f88a8c4c79b0f242846aab771d33ab22e6261517,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765250280334320922,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pt698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79e9e39-615a-4e96-afd4-3b7e856cc3f4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c2d6a83f-6e80-4623-947a-0056275a2dd7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:19:38 pause-739105 crio[2827]: time="2025-12-09 03:19:38.189876861Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=72257d30-d632-4ade-adf6-613e76063787 name=/runtime.v1.RuntimeService/Version
	Dec 09 03:19:38 pause-739105 crio[2827]: time="2025-12-09 03:19:38.189945768Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=72257d30-d632-4ade-adf6-613e76063787 name=/runtime.v1.RuntimeService/Version
	Dec 09 03:19:38 pause-739105 crio[2827]: time="2025-12-09 03:19:38.191365488Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8c68a23c-6fff-4880-bda4-5c2aa8d61596 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 03:19:38 pause-739105 crio[2827]: time="2025-12-09 03:19:38.193354651Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765250378193317949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8c68a23c-6fff-4880-bda4-5c2aa8d61596 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 03:19:38 pause-739105 crio[2827]: time="2025-12-09 03:19:38.195316920Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d3fee60-ed4e-4464-899c-54b7fb4697e4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:19:38 pause-739105 crio[2827]: time="2025-12-09 03:19:38.195399547Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d3fee60-ed4e-4464-899c-54b7fb4697e4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:19:38 pause-739105 crio[2827]: time="2025-12-09 03:19:38.195877512Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c8e068718377187d3b4b28e5adbf9015357aa760172aa9183c59e09e14d2968b,PodSandboxId:482099d32a99b56188357804fae68331d845e1c8a808e83ddd7cabdb87e9dfbe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765250359017539352,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rxfdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6d4576-8e92-4abd-8193-d8b9ddd7266d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc752c9728b3b332d395aa59842764f04d4caa40df28f144b683004557327f2,PodSandboxId:c55a751d02827c0d0f3788a23c6c21ce67c22da8309334cc84c158afcb60b96e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765250355273083336,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e02b020db785bb074900f6975d3f97,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":1025
7,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4a0bac8a98646d5c29d58b036502fcf131af10f10fa52995741cab94b0da2a1,PodSandboxId:0f8213b341c4e30564aa9565d039e4258055d12695daac0b882de811e7131527,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765250355249459598,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0bbece84dc5cd4f438fa4e2374c1ea,},Annotations:map[string]string{io.kuber
netes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbbae70353466efdc8393fadfdacf6a86580e99a587163778a008b33062df1d6,PodSandboxId:0ac8be03669c39343dfef2cd9ebf9e0f8ac28cfebe2b68a710e6c8cba13e8c3b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765250355211668801,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739105,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: e199ec6a0c235ca384a8aa2beb74b691,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb0e6d605cf6597e05179b758b729785cd27679b39c9bd63286c60feb85c8bf,PodSandboxId:68ee0f3f8016a7d9d60641a347822503a1249f2f0e4c89ba97376f34db491f1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765250355196871886,Labels:map[string]string{io.kubernetes.container.name: etcd,i
o.kubernetes.pod.name: etcd-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bae2024872c09e08f41c6347be77829b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6181d9c9b69dd99c98ab2b6e3a5cafeeeaf2e38e62616b8475b3d33316dd1944,PodSandboxId:c4989bf8fc2b1d4911c4a37cb3968b828e9b97ec217f1a3354e1973ace713fba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17652
50338139497040,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pt698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79e9e39-615a-4e96-afd4-3b7e856cc3f4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ae140d28849706ed992a697e3d674291e73d88d78d98be55c0ec624c7bfa79,PodSandboxId:482099d32a99b56188357804fae68331d845e1c8a808e8
3ddd7cabdb87e9dfbe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765250337055678970,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rxfdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6d4576-8e92-4abd-8193-d8b9ddd7266d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de7292ca8714180c12d0930642833f614c66bde369219707289a884965f99ae3,PodSandboxId:0f8213b341c4e30564aa9565d039e4258055d12695daac0b882de811e7131527,Metadata:&Conta
inerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765250336964797509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0bbece84dc5cd4f438fa4e2374c1ea,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4e129d74e7c8c6290797c64aa8bec8b1aff4105e7c26698b9f7ab
a0e9db897f,PodSandboxId:0ac8be03669c39343dfef2cd9ebf9e0f8ac28cfebe2b68a710e6c8cba13e8c3b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765250336959936499,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e199ec6a0c235ca384a8aa2beb74b691,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:4f488569925dc632f4dd1ca1b39df71b05e838a45e325da6c9ccc93996ce130c,PodSandboxId:68ee0f3f8016a7d9d60641a347822503a1249f2f0e4c89ba97376f34db491f1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765250336915222577,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bae2024872c09e08f41c6347be77829b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:026aaeb74336572baaaf4524f3103e9e18100483e64abfe1932d87eab57a79dc,PodSandboxId:c55a751d02827c0d0f3788a23c6c21ce67c22da8309334cc84c158afcb60b96e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765250336877876907,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e02b020db785bb074900f6975d3f97,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257
,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14444bc2d3af3e65b99e16ca7ed91add6e58b282592f24742c4b426c0748bf39,PodSandboxId:ebfcc6a20b38cdbb939e5982f88a8c4c79b0f242846aab771d33ab22e6261517,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765250280334320922,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pt698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79e9e39-615a-4e96-afd4-3b7e856cc3f4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0d3fee60-ed4e-4464-899c-54b7fb4697e4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:19:38 pause-739105 crio[2827]: time="2025-12-09 03:19:38.254923071Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fe52e384-bff5-4d6d-ad85-6ec224abfe32 name=/runtime.v1.RuntimeService/Version
	Dec 09 03:19:38 pause-739105 crio[2827]: time="2025-12-09 03:19:38.254993384Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fe52e384-bff5-4d6d-ad85-6ec224abfe32 name=/runtime.v1.RuntimeService/Version
	Dec 09 03:19:38 pause-739105 crio[2827]: time="2025-12-09 03:19:38.256794297Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f2fc53f8-4831-4c22-a1e9-5dd013bd1b49 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 03:19:38 pause-739105 crio[2827]: time="2025-12-09 03:19:38.257390317Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765250378257344210,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2fc53f8-4831-4c22-a1e9-5dd013bd1b49 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 03:19:38 pause-739105 crio[2827]: time="2025-12-09 03:19:38.258583959Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ba52843f-3e37-42f5-96e9-e055476ec904 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:19:38 pause-739105 crio[2827]: time="2025-12-09 03:19:38.258908403Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ba52843f-3e37-42f5-96e9-e055476ec904 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:19:38 pause-739105 crio[2827]: time="2025-12-09 03:19:38.259428880Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c8e068718377187d3b4b28e5adbf9015357aa760172aa9183c59e09e14d2968b,PodSandboxId:482099d32a99b56188357804fae68331d845e1c8a808e83ddd7cabdb87e9dfbe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765250359017539352,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rxfdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6d4576-8e92-4abd-8193-d8b9ddd7266d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc752c9728b3b332d395aa59842764f04d4caa40df28f144b683004557327f2,PodSandboxId:c55a751d02827c0d0f3788a23c6c21ce67c22da8309334cc84c158afcb60b96e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765250355273083336,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e02b020db785bb074900f6975d3f97,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":1025
7,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4a0bac8a98646d5c29d58b036502fcf131af10f10fa52995741cab94b0da2a1,PodSandboxId:0f8213b341c4e30564aa9565d039e4258055d12695daac0b882de811e7131527,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765250355249459598,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0bbece84dc5cd4f438fa4e2374c1ea,},Annotations:map[string]string{io.kuber
netes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbbae70353466efdc8393fadfdacf6a86580e99a587163778a008b33062df1d6,PodSandboxId:0ac8be03669c39343dfef2cd9ebf9e0f8ac28cfebe2b68a710e6c8cba13e8c3b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765250355211668801,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739105,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: e199ec6a0c235ca384a8aa2beb74b691,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb0e6d605cf6597e05179b758b729785cd27679b39c9bd63286c60feb85c8bf,PodSandboxId:68ee0f3f8016a7d9d60641a347822503a1249f2f0e4c89ba97376f34db491f1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765250355196871886,Labels:map[string]string{io.kubernetes.container.name: etcd,i
o.kubernetes.pod.name: etcd-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bae2024872c09e08f41c6347be77829b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6181d9c9b69dd99c98ab2b6e3a5cafeeeaf2e38e62616b8475b3d33316dd1944,PodSandboxId:c4989bf8fc2b1d4911c4a37cb3968b828e9b97ec217f1a3354e1973ace713fba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17652
50338139497040,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pt698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79e9e39-615a-4e96-afd4-3b7e856cc3f4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ae140d28849706ed992a697e3d674291e73d88d78d98be55c0ec624c7bfa79,PodSandboxId:482099d32a99b56188357804fae68331d845e1c8a808e8
3ddd7cabdb87e9dfbe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765250337055678970,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rxfdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6d4576-8e92-4abd-8193-d8b9ddd7266d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de7292ca8714180c12d0930642833f614c66bde369219707289a884965f99ae3,PodSandboxId:0f8213b341c4e30564aa9565d039e4258055d12695daac0b882de811e7131527,Metadata:&Conta
inerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765250336964797509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0bbece84dc5cd4f438fa4e2374c1ea,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4e129d74e7c8c6290797c64aa8bec8b1aff4105e7c26698b9f7ab
a0e9db897f,PodSandboxId:0ac8be03669c39343dfef2cd9ebf9e0f8ac28cfebe2b68a710e6c8cba13e8c3b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765250336959936499,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e199ec6a0c235ca384a8aa2beb74b691,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:4f488569925dc632f4dd1ca1b39df71b05e838a45e325da6c9ccc93996ce130c,PodSandboxId:68ee0f3f8016a7d9d60641a347822503a1249f2f0e4c89ba97376f34db491f1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765250336915222577,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bae2024872c09e08f41c6347be77829b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:026aaeb74336572baaaf4524f3103e9e18100483e64abfe1932d87eab57a79dc,PodSandboxId:c55a751d02827c0d0f3788a23c6c21ce67c22da8309334cc84c158afcb60b96e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765250336877876907,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e02b020db785bb074900f6975d3f97,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257
,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14444bc2d3af3e65b99e16ca7ed91add6e58b282592f24742c4b426c0748bf39,PodSandboxId:ebfcc6a20b38cdbb939e5982f88a8c4c79b0f242846aab771d33ab22e6261517,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765250280334320922,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pt698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79e9e39-615a-4e96-afd4-3b7e856cc3f4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ba52843f-3e37-42f5-96e9-e055476ec904 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:19:38 pause-739105 crio[2827]: time="2025-12-09 03:19:38.311267970Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=60fe6bd1-55e5-42a5-b897-d6cfffecf539 name=/runtime.v1.RuntimeService/Version
	Dec 09 03:19:38 pause-739105 crio[2827]: time="2025-12-09 03:19:38.311366108Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=60fe6bd1-55e5-42a5-b897-d6cfffecf539 name=/runtime.v1.RuntimeService/Version
	Dec 09 03:19:38 pause-739105 crio[2827]: time="2025-12-09 03:19:38.313265838Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=447d0ddd-fbca-4cdf-9dd9-f2dbd6c518b0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 03:19:38 pause-739105 crio[2827]: time="2025-12-09 03:19:38.313650462Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765250378313627914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=447d0ddd-fbca-4cdf-9dd9-f2dbd6c518b0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 03:19:38 pause-739105 crio[2827]: time="2025-12-09 03:19:38.315209664Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8bf5d1c7-c7af-46f1-8fa2-6a01d6e330e6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:19:38 pause-739105 crio[2827]: time="2025-12-09 03:19:38.315309091Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8bf5d1c7-c7af-46f1-8fa2-6a01d6e330e6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 03:19:38 pause-739105 crio[2827]: time="2025-12-09 03:19:38.315681617Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c8e068718377187d3b4b28e5adbf9015357aa760172aa9183c59e09e14d2968b,PodSandboxId:482099d32a99b56188357804fae68331d845e1c8a808e83ddd7cabdb87e9dfbe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765250359017539352,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rxfdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6d4576-8e92-4abd-8193-d8b9ddd7266d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc752c9728b3b332d395aa59842764f04d4caa40df28f144b683004557327f2,PodSandboxId:c55a751d02827c0d0f3788a23c6c21ce67c22da8309334cc84c158afcb60b96e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765250355273083336,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e02b020db785bb074900f6975d3f97,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":1025
7,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4a0bac8a98646d5c29d58b036502fcf131af10f10fa52995741cab94b0da2a1,PodSandboxId:0f8213b341c4e30564aa9565d039e4258055d12695daac0b882de811e7131527,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765250355249459598,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0bbece84dc5cd4f438fa4e2374c1ea,},Annotations:map[string]string{io.kuber
netes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbbae70353466efdc8393fadfdacf6a86580e99a587163778a008b33062df1d6,PodSandboxId:0ac8be03669c39343dfef2cd9ebf9e0f8ac28cfebe2b68a710e6c8cba13e8c3b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765250355211668801,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739105,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: e199ec6a0c235ca384a8aa2beb74b691,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb0e6d605cf6597e05179b758b729785cd27679b39c9bd63286c60feb85c8bf,PodSandboxId:68ee0f3f8016a7d9d60641a347822503a1249f2f0e4c89ba97376f34db491f1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765250355196871886,Labels:map[string]string{io.kubernetes.container.name: etcd,i
o.kubernetes.pod.name: etcd-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bae2024872c09e08f41c6347be77829b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6181d9c9b69dd99c98ab2b6e3a5cafeeeaf2e38e62616b8475b3d33316dd1944,PodSandboxId:c4989bf8fc2b1d4911c4a37cb3968b828e9b97ec217f1a3354e1973ace713fba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17652
50338139497040,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pt698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79e9e39-615a-4e96-afd4-3b7e856cc3f4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ae140d28849706ed992a697e3d674291e73d88d78d98be55c0ec624c7bfa79,PodSandboxId:482099d32a99b56188357804fae68331d845e1c8a808e8
3ddd7cabdb87e9dfbe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765250337055678970,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rxfdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6d4576-8e92-4abd-8193-d8b9ddd7266d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de7292ca8714180c12d0930642833f614c66bde369219707289a884965f99ae3,PodSandboxId:0f8213b341c4e30564aa9565d039e4258055d12695daac0b882de811e7131527,Metadata:&Conta
inerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765250336964797509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce0bbece84dc5cd4f438fa4e2374c1ea,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4e129d74e7c8c6290797c64aa8bec8b1aff4105e7c26698b9f7ab
a0e9db897f,PodSandboxId:0ac8be03669c39343dfef2cd9ebf9e0f8ac28cfebe2b68a710e6c8cba13e8c3b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765250336959936499,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e199ec6a0c235ca384a8aa2beb74b691,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:4f488569925dc632f4dd1ca1b39df71b05e838a45e325da6c9ccc93996ce130c,PodSandboxId:68ee0f3f8016a7d9d60641a347822503a1249f2f0e4c89ba97376f34db491f1a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765250336915222577,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bae2024872c09e08f41c6347be77829b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:026aaeb74336572baaaf4524f3103e9e18100483e64abfe1932d87eab57a79dc,PodSandboxId:c55a751d02827c0d0f3788a23c6c21ce67c22da8309334cc84c158afcb60b96e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765250336877876907,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739105,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e02b020db785bb074900f6975d3f97,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257
,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14444bc2d3af3e65b99e16ca7ed91add6e58b282592f24742c4b426c0748bf39,PodSandboxId:ebfcc6a20b38cdbb939e5982f88a8c4c79b0f242846aab771d33ab22e6261517,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765250280334320922,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pt698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79e9e39-615a-4e96-afd4-3b7e856cc3f4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8bf5d1c7-c7af-46f1-8fa2-6a01d6e330e6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	c8e0687183771       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   19 seconds ago       Running             kube-proxy                2                   482099d32a99b       kube-proxy-rxfdq                       kube-system
	bfc752c9728b3       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   23 seconds ago       Running             kube-controller-manager   2                   c55a751d02827       kube-controller-manager-pause-739105   kube-system
	a4a0bac8a9864       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   23 seconds ago       Running             kube-scheduler            2                   0f8213b341c4e       kube-scheduler-pause-739105            kube-system
	dbbae70353466       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   23 seconds ago       Running             kube-apiserver            2                   0ac8be03669c3       kube-apiserver-pause-739105            kube-system
	cfb0e6d605cf6       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   23 seconds ago       Running             etcd                      2                   68ee0f3f8016a       etcd-pause-739105                      kube-system
	6181d9c9b69dd       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   40 seconds ago       Running             coredns                   1                   c4989bf8fc2b1       coredns-66bc5c9577-pt698               kube-system
	a7ae140d28849       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   41 seconds ago       Exited              kube-proxy                1                   482099d32a99b       kube-proxy-rxfdq                       kube-system
	de7292ca87141       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   41 seconds ago       Exited              kube-scheduler            1                   0f8213b341c4e       kube-scheduler-pause-739105            kube-system
	d4e129d74e7c8       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   41 seconds ago       Exited              kube-apiserver            1                   0ac8be03669c3       kube-apiserver-pause-739105            kube-system
	4f488569925dc       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   41 seconds ago       Exited              etcd                      1                   68ee0f3f8016a       etcd-pause-739105                      kube-system
	026aaeb743365       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   41 seconds ago       Exited              kube-controller-manager   1                   c55a751d02827       kube-controller-manager-pause-739105   kube-system
	14444bc2d3af3       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   ebfcc6a20b38c       coredns-66bc5c9577-pt698               kube-system
	
	
	==> coredns [14444bc2d3af3e65b99e16ca7ed91add6e58b282592f24742c4b426c0748bf39] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	[INFO] Reloading complete
	[INFO] 127.0.0.1:52019 - 1662 "HINFO IN 7246317530562464401.1012150286828863301. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.035918379s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6181d9c9b69dd99c98ab2b6e3a5cafeeeaf2e38e62616b8475b3d33316dd1944] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42732 - 64164 "HINFO IN 4463350708249044738.2882555661380039538. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.043994581s
	
	
	==> describe nodes <==
	Name:               pause-739105
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-739105
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=pause-739105
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T03_17_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 03:17:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-739105
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 03:19:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 03:19:18 +0000   Tue, 09 Dec 2025 03:17:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 03:19:18 +0000   Tue, 09 Dec 2025 03:17:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 03:19:18 +0000   Tue, 09 Dec 2025 03:17:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 03:19:18 +0000   Tue, 09 Dec 2025 03:17:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.124
	  Hostname:    pause-739105
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 692518152e8f47e4868b516930bda7b7
	  System UUID:                69251815-2e8f-47e4-868b-516930bda7b7
	  Boot ID:                    9cec8cfc-af96-40f1-a394-16001a213c66
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-pt698                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     100s
	  kube-system                 etcd-pause-739105                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         106s
	  kube-system                 kube-apiserver-pause-739105             250m (12%)    0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-controller-manager-pause-739105    200m (10%)    0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-proxy-rxfdq                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-scheduler-pause-739105             100m (5%)     0 (0%)      0 (0%)           0 (0%)         106s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 98s                  kube-proxy       
	  Normal  Starting                 19s                  kube-proxy       
	  Normal  Starting                 114s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  114s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  113s (x8 over 114s)  kubelet          Node pause-739105 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s (x8 over 114s)  kubelet          Node pause-739105 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s (x7 over 114s)  kubelet          Node pause-739105 status is now: NodeHasSufficientPID
	  Normal  Starting                 106s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  106s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    105s                 kubelet          Node pause-739105 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     105s                 kubelet          Node pause-739105 status is now: NodeHasSufficientPID
	  Normal  NodeReady                105s                 kubelet          Node pause-739105 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  105s                 kubelet          Node pause-739105 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           101s                 node-controller  Node pause-739105 event: Registered Node pause-739105 in Controller
	  Normal  Starting                 24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)    kubelet          Node pause-739105 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)    kubelet          Node pause-739105 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)    kubelet          Node pause-739105 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16s                  node-controller  Node pause-739105 event: Registered Node pause-739105 in Controller
	
	
	==> dmesg <==
	[Dec 9 03:17] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001643] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000418] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.212541] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.101678] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.127671] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.115619] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.143368] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.028333] kauditd_printk_skb: 18 callbacks suppressed
	[Dec 9 03:18] kauditd_printk_skb: 219 callbacks suppressed
	[ +26.707237] kauditd_printk_skb: 38 callbacks suppressed
	[Dec 9 03:19] kauditd_printk_skb: 320 callbacks suppressed
	[  +4.514413] kauditd_printk_skb: 80 callbacks suppressed
	
	
	==> etcd [4f488569925dc632f4dd1ca1b39df71b05e838a45e325da6c9ccc93996ce130c] <==
	{"level":"warn","ts":"2025-12-09T03:19:00.754551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:00.777009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:00.787207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:00.796160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:00.821160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:00.854825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:00.920101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36194","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-09T03:19:12.017493Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-09T03:19:12.017581Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-739105","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.124:2380"],"advertise-client-urls":["https://192.168.72.124:2379"]}
	{"level":"error","ts":"2025-12-09T03:19:12.017685Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-09T03:19:12.017821Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-09T03:19:12.019961Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-09T03:19:12.020023Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b73f48baf02853d8","current-leader-member-id":"b73f48baf02853d8"}
	{"level":"info","ts":"2025-12-09T03:19:12.020104Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-09T03:19:12.020140Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-09T03:19:12.020566Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.72.124:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-09T03:19:12.020645Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.72.124:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-09T03:19:12.020658Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.72.124:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-09T03:19:12.020466Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-09T03:19:12.020680Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-09T03:19:12.020688Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-09T03:19:12.025067Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.72.124:2380"}
	{"level":"error","ts":"2025-12-09T03:19:12.025175Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.72.124:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-09T03:19:12.025202Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.72.124:2380"}
	{"level":"info","ts":"2025-12-09T03:19:12.025210Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-739105","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.124:2380"],"advertise-client-urls":["https://192.168.72.124:2379"]}
	
	
	==> etcd [cfb0e6d605cf6597e05179b758b729785cd27679b39c9bd63286c60feb85c8bf] <==
	{"level":"warn","ts":"2025-12-09T03:19:17.345426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.364398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.405838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.432369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.444793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.485005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.502468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.529459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.560805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.570669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.585053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.597837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.607879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.618368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.627437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.641331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.658596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.672999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.685149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.694066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.711672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.725669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.750751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.756996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T03:19:17.865269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35446","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:19:38 up 2 min,  0 users,  load average: 2.18, 0.79, 0.29
	Linux pause-739105 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [d4e129d74e7c8c6290797c64aa8bec8b1aff4105e7c26698b9f7aba0e9db897f] <==
	I1209 03:19:01.910156       1 controller.go:176] quota evaluator worker shutdown
	I1209 03:19:01.910161       1 controller.go:176] quota evaluator worker shutdown
	I1209 03:19:01.910264       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I1209 03:19:01.912920       1 repairip.go:246] Shutting down ipallocator-repair-controller
	I1209 03:19:01.913286       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1209 03:19:02.536493       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1209 03:19:02.536536       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1209 03:19:03.535637       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1209 03:19:03.535914       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	W1209 03:19:04.536072       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1209 03:19:04.536405       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1209 03:19:05.535547       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1209 03:19:05.536506       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1209 03:19:06.535581       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1209 03:19:06.536185       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1209 03:19:07.536530       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1209 03:19:07.537079       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	E1209 03:19:08.536455       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1209 03:19:08.536602       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	W1209 03:19:09.535559       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1209 03:19:09.535651       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1209 03:19:10.536166       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1209 03:19:10.536171       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1209 03:19:11.535831       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1209 03:19:11.536543       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-apiserver [dbbae70353466efdc8393fadfdacf6a86580e99a587163778a008b33062df1d6] <==
	I1209 03:19:18.686374       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1209 03:19:18.686392       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1209 03:19:18.687516       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1209 03:19:18.687580       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1209 03:19:18.690749       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1209 03:19:18.690787       1 policy_source.go:240] refreshing policies
	I1209 03:19:18.686186       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1209 03:19:18.690975       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1209 03:19:18.691058       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1209 03:19:18.691104       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1209 03:19:18.695981       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1209 03:19:18.696078       1 aggregator.go:171] initial CRD sync complete...
	I1209 03:19:18.696087       1 autoregister_controller.go:144] Starting autoregister controller
	I1209 03:19:18.696094       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1209 03:19:18.696098       1 cache.go:39] Caches are synced for autoregister controller
	I1209 03:19:18.696144       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1209 03:19:18.715785       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 03:19:18.776217       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1209 03:19:19.493606       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1209 03:19:20.541948       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1209 03:19:20.630928       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1209 03:19:20.685882       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 03:19:20.695458       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 03:19:22.128699       1 controller.go:667] quota admission added evaluator for: endpoints
	I1209 03:19:22.288422       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [026aaeb74336572baaaf4524f3103e9e18100483e64abfe1932d87eab57a79dc] <==
	I1209 03:18:59.048355       1 serving.go:386] Generated self-signed cert in-memory
	I1209 03:19:00.563608       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1209 03:19:00.563665       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 03:19:00.567311       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1209 03:19:00.567477       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1209 03:19:00.569854       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 03:19:00.569797       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	E1209 03:19:11.566839       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.72.124:8443/healthz\": dial tcp 192.168.72.124:8443: connect: connection refused"
	
	
	==> kube-controller-manager [bfc752c9728b3b332d395aa59842764f04d4caa40df28f144b683004557327f2] <==
	I1209 03:19:22.022038       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1209 03:19:22.022532       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1209 03:19:22.023466       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1209 03:19:22.024364       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1209 03:19:22.024460       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1209 03:19:22.024505       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1209 03:19:22.026969       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1209 03:19:22.031454       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1209 03:19:22.037683       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1209 03:19:22.040490       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1209 03:19:22.040557       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1209 03:19:22.047236       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1209 03:19:22.047250       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1209 03:19:22.047287       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1209 03:19:22.047297       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1209 03:19:22.050582       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1209 03:19:22.053383       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1209 03:19:22.054094       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1209 03:19:22.068632       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1209 03:19:22.075344       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1209 03:19:22.086673       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 03:19:22.087895       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1209 03:19:22.088021       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1209 03:19:22.088083       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-739105"
	I1209 03:19:22.088164       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-proxy [a7ae140d28849706ed992a697e3d674291e73d88d78d98be55c0ec624c7bfa79] <==
	I1209 03:18:58.186656       1 server_linux.go:53] "Using iptables proxy"
	
	
	==> kube-proxy [c8e068718377187d3b4b28e5adbf9015357aa760172aa9183c59e09e14d2968b] <==
	I1209 03:19:19.233394       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1209 03:19:19.334604       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1209 03:19:19.334678       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.124"]
	E1209 03:19:19.334846       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 03:19:19.383686       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1209 03:19:19.383841       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 03:19:19.383874       1 server_linux.go:132] "Using iptables Proxier"
	I1209 03:19:19.398573       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 03:19:19.399079       1 server.go:527] "Version info" version="v1.34.2"
	I1209 03:19:19.399315       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 03:19:19.405374       1 config.go:200] "Starting service config controller"
	I1209 03:19:19.405792       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 03:19:19.405905       1 config.go:309] "Starting node config controller"
	I1209 03:19:19.405927       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 03:19:19.405942       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 03:19:19.406268       1 config.go:106] "Starting endpoint slice config controller"
	I1209 03:19:19.408469       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 03:19:19.406457       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 03:19:19.408543       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 03:19:19.506978       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 03:19:19.509285       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 03:19:19.509389       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a4a0bac8a98646d5c29d58b036502fcf131af10f10fa52995741cab94b0da2a1] <==
	I1209 03:19:17.249755       1 serving.go:386] Generated self-signed cert in-memory
	W1209 03:19:18.574454       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 03:19:18.576965       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 03:19:18.577221       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 03:19:18.577338       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 03:19:18.647296       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1209 03:19:18.647326       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 03:19:18.649857       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 03:19:18.649959       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1209 03:19:18.650068       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 03:19:18.649976       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 03:19:18.752077       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [de7292ca8714180c12d0930642833f614c66bde369219707289a884965f99ae3] <==
	I1209 03:19:00.538087       1 serving.go:386] Generated self-signed cert in-memory
	W1209 03:19:01.580975       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 03:19:01.581021       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 03:19:01.581032       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 03:19:01.581038       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 03:19:01.675823       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1209 03:19:01.677429       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1209 03:19:01.677508       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I1209 03:19:01.684409       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1209 03:19:01.684570       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 03:19:01.688151       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 03:19:01.684590       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1209 03:19:01.688590       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	E1209 03:19:01.690792       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 03:19:01.690900       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 03:19:01.690981       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1209 03:19:01.691064       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1209 03:19:01.691085       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1209 03:19:01.691111       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 09 03:19:16 pause-739105 kubelet[3875]: E1209 03:19:16.910207    3875 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-739105\" not found" node="pause-739105"
	Dec 09 03:19:17 pause-739105 kubelet[3875]: E1209 03:19:17.911224    3875 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-739105\" not found" node="pause-739105"
	Dec 09 03:19:17 pause-739105 kubelet[3875]: E1209 03:19:17.913121    3875 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-739105\" not found" node="pause-739105"
	Dec 09 03:19:17 pause-739105 kubelet[3875]: E1209 03:19:17.913359    3875 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-739105\" not found" node="pause-739105"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: I1209 03:19:18.612458    3875 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-739105"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: I1209 03:19:18.696386    3875 apiserver.go:52] "Watching apiserver"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: I1209 03:19:18.717694    3875 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: I1209 03:19:18.741928    3875 kubelet_node_status.go:124] "Node was previously registered" node="pause-739105"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: I1209 03:19:18.742048    3875 kubelet_node_status.go:78] "Successfully registered node" node="pause-739105"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: I1209 03:19:18.742076    3875 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: I1209 03:19:18.745411    3875 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: E1209 03:19:18.767528    3875 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-739105\" already exists" pod="kube-system/kube-controller-manager-pause-739105"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: I1209 03:19:18.767546    3875 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-739105"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: I1209 03:19:18.771930    3875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad6d4576-8e92-4abd-8193-d8b9ddd7266d-lib-modules\") pod \"kube-proxy-rxfdq\" (UID: \"ad6d4576-8e92-4abd-8193-d8b9ddd7266d\") " pod="kube-system/kube-proxy-rxfdq"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: I1209 03:19:18.771977    3875 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad6d4576-8e92-4abd-8193-d8b9ddd7266d-xtables-lock\") pod \"kube-proxy-rxfdq\" (UID: \"ad6d4576-8e92-4abd-8193-d8b9ddd7266d\") " pod="kube-system/kube-proxy-rxfdq"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: E1209 03:19:18.784434    3875 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-739105\" already exists" pod="kube-system/kube-scheduler-pause-739105"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: I1209 03:19:18.784480    3875 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-739105"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: E1209 03:19:18.804539    3875 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-pause-739105\" already exists" pod="kube-system/etcd-pause-739105"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: I1209 03:19:18.804587    3875 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-739105"
	Dec 09 03:19:18 pause-739105 kubelet[3875]: E1209 03:19:18.820664    3875 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-739105\" already exists" pod="kube-system/kube-apiserver-pause-739105"
	Dec 09 03:19:19 pause-739105 kubelet[3875]: I1209 03:19:19.003699    3875 scope.go:117] "RemoveContainer" containerID="a7ae140d28849706ed992a697e3d674291e73d88d78d98be55c0ec624c7bfa79"
	Dec 09 03:19:24 pause-739105 kubelet[3875]: E1209 03:19:24.849893    3875 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765250364849031868 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 09 03:19:24 pause-739105 kubelet[3875]: E1209 03:19:24.849937    3875 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765250364849031868 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 09 03:19:34 pause-739105 kubelet[3875]: E1209 03:19:34.852801    3875 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765250374852310367 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 09 03:19:34 pause-739105 kubelet[3875]: E1209 03:19:34.852825    3875 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765250374852310367 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-739105 -n pause-739105
helpers_test.go:269: (dbg) Run:  kubectl --context pause-739105 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (60.55s)

                                                
                                    

Test pass (364/431)

Order   Passed test   Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 6.36
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.17
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.34.2/json-events 3.4
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.08
18 TestDownloadOnly/v1.34.2/DeleteAll 0.17
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.16
21 TestDownloadOnly/v1.35.0-beta.0/json-events 3.17
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.17
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.16
30 TestBinaryMirror 0.68
31 TestOffline 103.34
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 136
40 TestAddons/serial/GCPAuth/Namespaces 0.15
41 TestAddons/serial/GCPAuth/FakeCredentials 9.6
44 TestAddons/parallel/Registry 16.14
45 TestAddons/parallel/RegistryCreds 0.72
47 TestAddons/parallel/InspektorGadget 11.94
48 TestAddons/parallel/MetricsServer 6.39
50 TestAddons/parallel/CSI 51.21
51 TestAddons/parallel/Headlamp 22.15
52 TestAddons/parallel/CloudSpanner 6.82
53 TestAddons/parallel/LocalPath 54.36
54 TestAddons/parallel/NvidiaDevicePlugin 6.88
55 TestAddons/parallel/Yakd 11.87
57 TestAddons/StoppedEnableDisable 86.37
58 TestCertOptions 78.49
59 TestCertExpiration 289.5
61 TestForceSystemdFlag 80.88
62 TestForceSystemdEnv 42.86
67 TestErrorSpam/setup 37.53
68 TestErrorSpam/start 0.38
69 TestErrorSpam/status 0.73
70 TestErrorSpam/pause 1.55
71 TestErrorSpam/unpause 1.92
72 TestErrorSpam/stop 5.17
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 81.28
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 44.36
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.09
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.17
84 TestFunctional/serial/CacheCmd/cache/add_local 1.17
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
86 TestFunctional/serial/CacheCmd/cache/list 0.07
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.2
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.6
89 TestFunctional/serial/CacheCmd/cache/delete 0.14
90 TestFunctional/serial/MinikubeKubectlCmd 0.13
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
92 TestFunctional/serial/ExtraConfig 34.22
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.52
95 TestFunctional/serial/LogsFileCmd 1.47
96 TestFunctional/serial/InvalidService 3.91
98 TestFunctional/parallel/ConfigCmd 0.48
100 TestFunctional/parallel/DryRun 0.25
101 TestFunctional/parallel/InternationalLanguage 0.12
102 TestFunctional/parallel/StatusCmd 0.73
107 TestFunctional/parallel/AddonsCmd 0.18
108 TestFunctional/parallel/PersistentVolumeClaim 80.95
110 TestFunctional/parallel/SSHCmd 0.38
111 TestFunctional/parallel/CpCmd 1.23
112 TestFunctional/parallel/MySQL 53.52
113 TestFunctional/parallel/FileSync 0.18
114 TestFunctional/parallel/CertSync 1.03
118 TestFunctional/parallel/NodeLabels 0.08
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
122 TestFunctional/parallel/License 0.25
132 TestFunctional/parallel/Version/short 0.07
133 TestFunctional/parallel/Version/components 0.46
134 TestFunctional/parallel/ImageCommands/ImageListShort 0.19
135 TestFunctional/parallel/ImageCommands/ImageListTable 0.19
136 TestFunctional/parallel/ImageCommands/ImageListJson 0.19
137 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
138 TestFunctional/parallel/ImageCommands/ImageBuild 2.92
139 TestFunctional/parallel/ImageCommands/Setup 0.49
140 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.94
142 TestFunctional/parallel/ProfileCmd/profile_list 0.37
143 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.85
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.01
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.88
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.67
151 TestFunctional/parallel/MountCmd/any-port 60.96
152 TestFunctional/parallel/MountCmd/specific-port 1.37
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.15
154 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
155 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.07
156 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
157 TestFunctional/parallel/ServiceCmd/List 1.23
158 TestFunctional/parallel/ServiceCmd/JSONOutput 1.25
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 76.18
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 30.58
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.09
176 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.13
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.1
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.07
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.07
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.2
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.62
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.14
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.14
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.12
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 59.39
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.41
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.48
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.5
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.47
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.42
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.15
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.74
200 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.17
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 92.02
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.36
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.23
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 58.2
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.2
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.22
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.07
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.42
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.36
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.07
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.45
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.24
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.23
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.2
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.21
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.17
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.23
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.78
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.08
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.08
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.08
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.43
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.4
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.42
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 35.99
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.95
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.06
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.53
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.49
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.89
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.57
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.47
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.11
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 1.21
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 1.22
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
261 TestMultiControlPlane/serial/StartCluster 203.28
262 TestMultiControlPlane/serial/DeployApp 7.25
263 TestMultiControlPlane/serial/PingHostFromPods 1.45
264 TestMultiControlPlane/serial/AddWorkerNode 44.96
265 TestMultiControlPlane/serial/NodeLabels 0.07
266 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.73
267 TestMultiControlPlane/serial/CopyFile 11.67
268 TestMultiControlPlane/serial/StopSecondaryNode 88.01
269 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.55
270 TestMultiControlPlane/serial/RestartSecondaryNode 42.24
271 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.92
272 TestMultiControlPlane/serial/RestartClusterKeepsNodes 371.74
273 TestMultiControlPlane/serial/DeleteSecondaryNode 18.35
274 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.57
275 TestMultiControlPlane/serial/StopCluster 261.62
276 TestMultiControlPlane/serial/RestartCluster 101.97
277 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.55
278 TestMultiControlPlane/serial/AddSecondaryNode 71.75
279 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.72
284 TestJSONOutput/start/Command 82.04
285 TestJSONOutput/start/Audit 0
287 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
288 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
290 TestJSONOutput/pause/Command 0.78
291 TestJSONOutput/pause/Audit 0
293 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/unpause/Command 0.65
297 TestJSONOutput/unpause/Audit 0
299 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/stop/Command 7.05
303 TestJSONOutput/stop/Audit 0
305 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
307 TestErrorJSONOutput 0.27
312 TestMainNoArgs 0.06
313 TestMinikubeProfile 84.91
316 TestMountStart/serial/StartWithMountFirst 22.6
317 TestMountStart/serial/VerifyMountFirst 0.33
318 TestMountStart/serial/StartWithMountSecond 23.35
319 TestMountStart/serial/VerifyMountSecond 0.33
320 TestMountStart/serial/DeleteFirst 0.7
321 TestMountStart/serial/VerifyMountPostDelete 0.33
322 TestMountStart/serial/Stop 1.32
323 TestMountStart/serial/RestartStopped 20.92
324 TestMountStart/serial/VerifyMountPostStop 0.31
327 TestMultiNode/serial/FreshStart2Nodes 101.91
328 TestMultiNode/serial/DeployApp2Nodes 4.95
329 TestMultiNode/serial/PingHostFrom2Pods 0.95
330 TestMultiNode/serial/AddNode 41.3
331 TestMultiNode/serial/MultiNodeLabels 0.07
332 TestMultiNode/serial/ProfileList 0.5
333 TestMultiNode/serial/CopyFile 6.47
334 TestMultiNode/serial/StopNode 2.35
335 TestMultiNode/serial/StartAfterStop 45.33
336 TestMultiNode/serial/RestartKeepsNodes 329.47
337 TestMultiNode/serial/DeleteNode 2.72
338 TestMultiNode/serial/StopMultiNode 163.01
339 TestMultiNode/serial/RestartMultiNode 120.38
340 TestMultiNode/serial/ValidateNameConflict 40.85
347 TestScheduledStopUnix 110.26
351 TestRunningBinaryUpgrade 381.74
353 TestKubernetesUpgrade 96.27
356 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
357 TestNoKubernetes/serial/StartWithK8s 81.53
358 TestNoKubernetes/serial/StartWithStopK8s 31.06
359 TestNoKubernetes/serial/Start 47.96
360 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
361 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
362 TestNoKubernetes/serial/ProfileList 1.14
363 TestNoKubernetes/serial/Stop 1.48
364 TestNoKubernetes/serial/StartNoArgs 35.12
365 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
374 TestPause/serial/Start 106.44
375 TestStoppedBinaryUpgrade/Setup 0.66
376 TestStoppedBinaryUpgrade/Upgrade 78.03
378 TestISOImage/Setup 25.01
379 TestStoppedBinaryUpgrade/MinikubeLogs 1.58
387 TestNetworkPlugins/group/false 4.39
389 TestStartStop/group/old-k8s-version/serial/FirstStart 108.69
394 TestStartStop/group/no-preload/serial/FirstStart 120.59
396 TestStartStop/group/embed-certs/serial/FirstStart 127.7
398 TestISOImage/Binaries/crictl 0.21
399 TestISOImage/Binaries/curl 0.19
400 TestISOImage/Binaries/docker 0.2
401 TestISOImage/Binaries/git 0.19
402 TestISOImage/Binaries/iptables 0.18
403 TestISOImage/Binaries/podman 0.18
404 TestISOImage/Binaries/rsync 0.2
405 TestISOImage/Binaries/socat 0.2
406 TestISOImage/Binaries/wget 0.19
407 TestISOImage/Binaries/VBoxControl 0.2
408 TestISOImage/Binaries/VBoxService 0.2
410 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 141.82
411 TestStartStop/group/old-k8s-version/serial/DeployApp 10.33
412 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.25
413 TestStartStop/group/old-k8s-version/serial/Stop 81.68
414 TestStartStop/group/no-preload/serial/DeployApp 10.34
415 TestStartStop/group/embed-certs/serial/DeployApp 8.3
416 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.11
417 TestStartStop/group/no-preload/serial/Stop 73.12
418 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.99
419 TestStartStop/group/embed-certs/serial/Stop 86.63
420 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.28
421 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1
422 TestStartStop/group/default-k8s-diff-port/serial/Stop 90.39
423 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.16
424 TestStartStop/group/old-k8s-version/serial/SecondStart 46.01
425 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
426 TestStartStop/group/no-preload/serial/SecondStart 64.33
427 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
428 TestStartStop/group/embed-certs/serial/SecondStart 53
429 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 13.01
430 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
431 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 53.52
432 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
433 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
434 TestStartStop/group/old-k8s-version/serial/Pause 3.64
436 TestStartStop/group/newest-cni/serial/FirstStart 60.21
437 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.01
438 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 16.01
439 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
440 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.38
441 TestStartStop/group/no-preload/serial/Pause 3.23
442 TestNetworkPlugins/group/auto/Start 93.95
443 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
444 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
445 TestStartStop/group/embed-certs/serial/Pause 3.49
446 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.04
447 TestNetworkPlugins/group/kindnet/Start 74.55
448 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.1
449 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
450 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.94
451 TestNetworkPlugins/group/calico/Start 116
452 TestStartStop/group/newest-cni/serial/DeployApp 0
453 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.51
454 TestStartStop/group/newest-cni/serial/Stop 8.49
455 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
456 TestStartStop/group/newest-cni/serial/SecondStart 68.96
457 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
458 TestNetworkPlugins/group/auto/KubeletFlags 0.22
459 TestNetworkPlugins/group/auto/NetCatPod 12.32
460 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
461 TestNetworkPlugins/group/kindnet/NetCatPod 12.3
462 TestNetworkPlugins/group/auto/DNS 0.23
463 TestNetworkPlugins/group/auto/Localhost 0.18
464 TestNetworkPlugins/group/auto/HairPin 0.18
465 TestNetworkPlugins/group/kindnet/DNS 0.24
466 TestNetworkPlugins/group/kindnet/Localhost 0.19
467 TestNetworkPlugins/group/kindnet/HairPin 0.22
468 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
469 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
470 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
471 TestStartStop/group/newest-cni/serial/Pause 4.36
472 TestNetworkPlugins/group/custom-flannel/Start 72.37
473 TestNetworkPlugins/group/enable-default-cni/Start 74.26
474 TestNetworkPlugins/group/flannel/Start 110.13
475 TestNetworkPlugins/group/calico/ControllerPod 6.01
476 TestNetworkPlugins/group/calico/KubeletFlags 0.26
477 TestNetworkPlugins/group/calico/NetCatPod 11.79
478 TestNetworkPlugins/group/calico/DNS 0.26
479 TestNetworkPlugins/group/calico/Localhost 0.2
480 TestNetworkPlugins/group/calico/HairPin 0.21
481 TestNetworkPlugins/group/bridge/Start 90.79
482 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
483 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.82
484 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
485 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.34
486 TestNetworkPlugins/group/custom-flannel/DNS 0.22
487 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
488 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
489 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
490 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
491 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
493 TestISOImage/PersistentMounts//data 0.19
494 TestISOImage/PersistentMounts//var/lib/docker 0.19
495 TestISOImage/PersistentMounts//var/lib/cni 0.19
496 TestISOImage/PersistentMounts//var/lib/kubelet 0.19
497 TestISOImage/PersistentMounts//var/lib/minikube 0.2
498 TestISOImage/PersistentMounts//var/lib/toolbox 0.19
499 TestISOImage/PersistentMounts//var/lib/boot2docker 0.2
500 TestISOImage/VersionJSON 0.18
501 TestISOImage/eBPFSupport 0.18
502 TestNetworkPlugins/group/flannel/ControllerPod 6.01
503 TestNetworkPlugins/group/flannel/KubeletFlags 0.17
504 TestNetworkPlugins/group/flannel/NetCatPod 9.23
505 TestNetworkPlugins/group/flannel/DNS 0.15
506 TestNetworkPlugins/group/flannel/Localhost 0.14
507 TestNetworkPlugins/group/flannel/HairPin 0.13
508 TestNetworkPlugins/group/bridge/KubeletFlags 0.18
509 TestNetworkPlugins/group/bridge/NetCatPod 10.24
510 TestNetworkPlugins/group/bridge/DNS 0.16
511 TestNetworkPlugins/group/bridge/Localhost 0.19
512 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.28.0/json-events (6.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-589255 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-589255 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.362904813s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.36s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1209 01:55:42.813989  258854 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1209 01:55:42.814123  258854 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-254936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-589255
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-589255: exit status 85 (81.214911ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-589255 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-589255 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 01:55:36
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 01:55:36.508081  258866 out.go:360] Setting OutFile to fd 1 ...
	I1209 01:55:36.508390  258866 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:55:36.508403  258866 out.go:374] Setting ErrFile to fd 2...
	I1209 01:55:36.508408  258866 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:55:36.508594  258866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
	W1209 01:55:36.508741  258866 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22081-254936/.minikube/config/config.json: open /home/jenkins/minikube-integration/22081-254936/.minikube/config/config.json: no such file or directory
	I1209 01:55:36.509288  258866 out.go:368] Setting JSON to true
	I1209 01:55:36.510295  258866 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27487,"bootTime":1765217850,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 01:55:36.510362  258866 start.go:143] virtualization: kvm guest
	I1209 01:55:36.515102  258866 out.go:99] [download-only-589255] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1209 01:55:36.515344  258866 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22081-254936/.minikube/cache/preloaded-tarball: no such file or directory
	I1209 01:55:36.515371  258866 notify.go:221] Checking for updates...
	I1209 01:55:36.516714  258866 out.go:171] MINIKUBE_LOCATION=22081
	I1209 01:55:36.518178  258866 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 01:55:36.519602  258866 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22081-254936/kubeconfig
	I1209 01:55:36.520819  258866 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-254936/.minikube
	I1209 01:55:36.522184  258866 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1209 01:55:36.524726  258866 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 01:55:36.525041  258866 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 01:55:36.560882  258866 out.go:99] Using the kvm2 driver based on user configuration
	I1209 01:55:36.560922  258866 start.go:309] selected driver: kvm2
	I1209 01:55:36.560930  258866 start.go:927] validating driver "kvm2" against <nil>
	I1209 01:55:36.561260  258866 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 01:55:36.561767  258866 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1209 01:55:36.561937  258866 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 01:55:36.561980  258866 cni.go:84] Creating CNI manager for ""
	I1209 01:55:36.562032  258866 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 01:55:36.562042  258866 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 01:55:36.562087  258866 start.go:353] cluster config:
	{Name:download-only-589255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-589255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 01:55:36.562331  258866 iso.go:125] acquiring lock: {Name:mk5e3a22cdf6cd1ed24c9a04adaf1049140c04b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 01:55:36.563900  258866 out.go:99] Downloading VM boot image ...
	I1209 01:55:36.563931  258866 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22081-254936/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso
	I1209 01:55:38.987162  258866 out.go:99] Starting "download-only-589255" primary control-plane node in "download-only-589255" cluster
	I1209 01:55:38.987229  258866 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1209 01:55:39.006488  258866 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1209 01:55:39.006530  258866 cache.go:65] Caching tarball of preloaded images
	I1209 01:55:39.006736  258866 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1209 01:55:39.008519  258866 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1209 01:55:39.008541  258866 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1209 01:55:39.028345  258866 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1209 01:55:39.028466  258866 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22081-254936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-589255 host does not exist
	  To start a cluster, run: "minikube start -p download-only-589255"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-589255
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestDownloadOnly/v1.34.2/json-events (3.4s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-985564 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-985564 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.400840965s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (3.40s)

                                                
                                    
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1209 01:55:46.631246  258854 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1209 01:55:46.631292  258854 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-254936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-985564
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-985564: exit status 85 (80.072797ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-589255 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-589255 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
	│ delete  │ -p download-only-589255                                                                                                                                                 │ download-only-589255 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
	│ start   │ -o=json --download-only -p download-only-985564 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-985564 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 01:55:43
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 01:55:43.288163  259059 out.go:360] Setting OutFile to fd 1 ...
	I1209 01:55:43.288421  259059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:55:43.288430  259059 out.go:374] Setting ErrFile to fd 2...
	I1209 01:55:43.288435  259059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:55:43.288656  259059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
	I1209 01:55:43.289196  259059 out.go:368] Setting JSON to true
	I1209 01:55:43.290096  259059 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27493,"bootTime":1765217850,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 01:55:43.290177  259059 start.go:143] virtualization: kvm guest
	I1209 01:55:43.292248  259059 out.go:99] [download-only-985564] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 01:55:43.292461  259059 notify.go:221] Checking for updates...
	I1209 01:55:43.294566  259059 out.go:171] MINIKUBE_LOCATION=22081
	I1209 01:55:43.296410  259059 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 01:55:43.297855  259059 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22081-254936/kubeconfig
	I1209 01:55:43.299375  259059 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-254936/.minikube
	I1209 01:55:43.300952  259059 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-985564 host does not exist
	  To start a cluster, run: "minikube start -p download-only-985564"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.17s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-985564
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/json-events (3.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-045512 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-045512 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.169460846s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (3.17s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1209 01:55:50.202954  258854 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1209 01:55:50.203002  258854 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-254936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-045512
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-045512: exit status 85 (79.446211ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-589255 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-589255 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
	│ delete  │ -p download-only-589255                                                                                                                                                        │ download-only-589255 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
	│ start   │ -o=json --download-only -p download-only-985564 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-985564 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
	│ delete  │ -p download-only-985564                                                                                                                                                        │ download-only-985564 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
	│ start   │ -o=json --download-only -p download-only-045512 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-045512 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 01:55:47
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 01:55:47.089999  259238 out.go:360] Setting OutFile to fd 1 ...
	I1209 01:55:47.090255  259238 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:55:47.090268  259238 out.go:374] Setting ErrFile to fd 2...
	I1209 01:55:47.090274  259238 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 01:55:47.090494  259238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
	I1209 01:55:47.091059  259238 out.go:368] Setting JSON to true
	I1209 01:55:47.091938  259238 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27497,"bootTime":1765217850,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 01:55:47.092013  259238 start.go:143] virtualization: kvm guest
	I1209 01:55:47.093981  259238 out.go:99] [download-only-045512] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 01:55:47.094229  259238 notify.go:221] Checking for updates...
	I1209 01:55:47.095767  259238 out.go:171] MINIKUBE_LOCATION=22081
	I1209 01:55:47.097280  259238 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 01:55:47.098896  259238 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22081-254936/kubeconfig
	I1209 01:55:47.102107  259238 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-254936/.minikube
	I1209 01:55:47.103667  259238 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-045512 host does not exist
	  To start a cluster, run: "minikube start -p download-only-045512"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.17s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.17s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-045512
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.16s)

TestBinaryMirror (0.68s)

=== RUN   TestBinaryMirror
I1209 01:55:51.071083  258854 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-413418 --alsologtostderr --binary-mirror http://127.0.0.1:33411 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-413418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-413418
--- PASS: TestBinaryMirror (0.68s)

TestOffline (103.34s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-935850 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-935850 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m42.363573662s)
helpers_test.go:175: Cleaning up "offline-crio-935850" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-935850
--- PASS: TestOffline (103.34s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1060: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-712341
addons_test.go:1060: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-712341: exit status 85 (66.558683ms)

-- stdout --
	* Profile "addons-712341" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-712341"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1071: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-712341
addons_test.go:1071: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-712341: exit status 85 (69.47311ms)

-- stdout --
	* Profile "addons-712341" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-712341"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (136s)

=== RUN   TestAddons/Setup
addons_test.go:113: (dbg) Run:  out/minikube-linux-amd64 start -p addons-712341 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:113: (dbg) Done: out/minikube-linux-amd64 start -p addons-712341 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m15.999961064s)
--- PASS: TestAddons/Setup (136.00s)

TestAddons/serial/GCPAuth/Namespaces (0.15s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:690: (dbg) Run:  kubectl --context addons-712341 create ns new-namespace
addons_test.go:704: (dbg) Run:  kubectl --context addons-712341 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

TestAddons/serial/GCPAuth/FakeCredentials (9.6s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:735: (dbg) Run:  kubectl --context addons-712341 create -f testdata/busybox.yaml
addons_test.go:742: (dbg) Run:  kubectl --context addons-712341 create sa gcp-auth-test
addons_test.go:748: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [de8fd268-6e5a-4d89-89ef-8d352023a017] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [de8fd268-6e5a-4d89-89ef-8d352023a017] Running
addons_test.go:748: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004551826s
addons_test.go:754: (dbg) Run:  kubectl --context addons-712341 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:766: (dbg) Run:  kubectl --context addons-712341 describe sa gcp-auth-test
addons_test.go:804: (dbg) Run:  kubectl --context addons-712341 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.60s)

TestAddons/parallel/Registry (16.14s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:442: registry stabilized in 9.234953ms
addons_test.go:444: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-kbblm" [2debdb6b-823b-4310-974e-3cf03104d154] Running
addons_test.go:444: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.106494441s
addons_test.go:447: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-w94f7" [66b090e3-ac51-4b13-a537-2f07c2a6961d] Running
addons_test.go:447: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005807419s
addons_test.go:452: (dbg) Run:  kubectl --context addons-712341 delete po -l run=registry-test --now
addons_test.go:457: (dbg) Run:  kubectl --context addons-712341 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:457: (dbg) Done: kubectl --context addons-712341 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.13051113s)
addons_test.go:471: (dbg) Run:  out/minikube-linux-amd64 -p addons-712341 ip
2025/12/09 01:58:41 [DEBUG] GET http://192.168.39.107:5000
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-712341 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.14s)

TestAddons/parallel/RegistryCreds (0.72s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:383: registry-creds stabilized in 5.636243ms
addons_test.go:385: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-712341
addons_test.go:392: (dbg) Run:  kubectl --context addons-712341 -n kube-system get secret -o yaml
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-712341 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.72s)

TestAddons/parallel/InspektorGadget (11.94s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:883: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-jcdvr" [02727016-53cb-4abf-bd8f-9f52086a94d3] Running
addons_test.go:883: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004942041s
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-712341 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1113: (dbg) Done: out/minikube-linux-amd64 -p addons-712341 addons disable inspektor-gadget --alsologtostderr -v=1: (5.93614142s)
--- PASS: TestAddons/parallel/InspektorGadget (11.94s)

TestAddons/parallel/MetricsServer (6.39s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:515: metrics-server stabilized in 9.812093ms
addons_test.go:517: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-kkqs4" [84337421-94b2-47bc-a027-73f7b42030a3] Running
addons_test.go:517: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.109634724s
addons_test.go:523: (dbg) Run:  kubectl --context addons-712341 top pods -n kube-system
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-712341 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1113: (dbg) Done: out/minikube-linux-amd64 -p addons-712341 addons disable metrics-server --alsologtostderr -v=1: (1.178375111s)
--- PASS: TestAddons/parallel/MetricsServer (6.39s)

TestAddons/parallel/CSI (51.21s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1209 01:58:48.560295  258854 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1209 01:58:48.565522  258854 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1209 01:58:48.565557  258854 kapi.go:107] duration metric: took 5.278982ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:609: csi-hostpath-driver pods stabilized in 5.2941ms
addons_test.go:612: (dbg) Run:  kubectl --context addons-712341 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-712341 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [926a0b75-21bf-4827-9c52-580f55788024] Pending
helpers_test.go:352: "task-pv-pod" [926a0b75-21bf-4827-9c52-580f55788024] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [926a0b75-21bf-4827-9c52-580f55788024] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.00610708s
addons_test.go:632: (dbg) Run:  kubectl --context addons-712341 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:637: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-712341 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-712341 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:642: (dbg) Run:  kubectl --context addons-712341 delete pod task-pv-pod
addons_test.go:642: (dbg) Done: kubectl --context addons-712341 delete pod task-pv-pod: (1.176169559s)
addons_test.go:648: (dbg) Run:  kubectl --context addons-712341 delete pvc hpvc
addons_test.go:654: (dbg) Run:  kubectl --context addons-712341 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:659: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:664: (dbg) Run:  kubectl --context addons-712341 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:669: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [1bc7c6af-3783-4628-b3dc-62cbbe1fcb27] Pending
helpers_test.go:352: "task-pv-pod-restore" [1bc7c6af-3783-4628-b3dc-62cbbe1fcb27] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [1bc7c6af-3783-4628-b3dc-62cbbe1fcb27] Running
addons_test.go:669: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004993545s
addons_test.go:674: (dbg) Run:  kubectl --context addons-712341 delete pod task-pv-pod-restore
addons_test.go:678: (dbg) Run:  kubectl --context addons-712341 delete pvc hpvc-restore
addons_test.go:682: (dbg) Run:  kubectl --context addons-712341 delete volumesnapshot new-snapshot-demo
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-712341 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-712341 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1113: (dbg) Done: out/minikube-linux-amd64 -p addons-712341 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.097838493s)
--- PASS: TestAddons/parallel/CSI (51.21s)

TestAddons/parallel/Headlamp (22.15s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:868: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-712341 --alsologtostderr -v=1
addons_test.go:873: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-fmtdg" [912f285d-3d6e-4d17-8e12-c81f6802da63] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-fmtdg" [912f285d-3d6e-4d17-8e12-c81f6802da63] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-fmtdg" [912f285d-3d6e-4d17-8e12-c81f6802da63] Running
addons_test.go:873: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.006107647s
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-712341 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1113: (dbg) Done: out/minikube-linux-amd64 -p addons-712341 addons disable headlamp --alsologtostderr -v=1: (6.206807253s)
--- PASS: TestAddons/parallel/Headlamp (22.15s)

TestAddons/parallel/CloudSpanner (6.82s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:900: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-9v27r" [a1b7a27b-b092-4a03-97a5-4ae3a2011a92] Running
addons_test.go:900: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004761139s
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-712341 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.82s)

TestAddons/parallel/LocalPath (54.36s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:1009: (dbg) Run:  kubectl --context addons-712341 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:1015: (dbg) Run:  kubectl --context addons-712341 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:1019: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-712341 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:1022: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [8c9b6a39-b36a-4d2c-a102-ddbb31cd08f7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [8c9b6a39-b36a-4d2c-a102-ddbb31cd08f7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [8c9b6a39-b36a-4d2c-a102-ddbb31cd08f7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:1022: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.005043638s
addons_test.go:1027: (dbg) Run:  kubectl --context addons-712341 get pvc test-pvc -o=json
addons_test.go:1036: (dbg) Run:  out/minikube-linux-amd64 -p addons-712341 ssh "cat /opt/local-path-provisioner/pvc-5f1d4e27-646c-4ec7-9bd6-c32e7c190c45_default_test-pvc/file1"
addons_test.go:1048: (dbg) Run:  kubectl --context addons-712341 delete pod test-local-path
addons_test.go:1052: (dbg) Run:  kubectl --context addons-712341 delete pvc test-pvc
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-712341 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1113: (dbg) Done: out/minikube-linux-amd64 -p addons-712341 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.466893707s)
--- PASS: TestAddons/parallel/LocalPath (54.36s)

TestAddons/parallel/NvidiaDevicePlugin (6.88s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1085: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-44sbc" [046c49b7-0e2c-4126-bc6a-ba9c44dcdfeb] Running
addons_test.go:1085: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.062626709s
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-712341 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.88s)

TestAddons/parallel/Yakd (11.87s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1107: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-jzpjn" [2458cc40-6b3a-4f41-9015-bbc21dab3428] Running
addons_test.go:1107: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005093096s
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-712341 addons disable yakd --alsologtostderr -v=1
addons_test.go:1113: (dbg) Done: out/minikube-linux-amd64 -p addons-712341 addons disable yakd --alsologtostderr -v=1: (5.86369204s)
--- PASS: TestAddons/parallel/Yakd (11.87s)

TestAddons/StoppedEnableDisable (86.37s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:177: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-712341
addons_test.go:177: (dbg) Done: out/minikube-linux-amd64 stop -p addons-712341: (1m26.146188091s)
addons_test.go:181: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-712341
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-712341
addons_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-712341
--- PASS: TestAddons/StoppedEnableDisable (86.37s)

TestCertOptions (78.49s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-358032 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-358032 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m16.164264471s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-358032 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-358032 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-358032 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-358032" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-358032
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-358032: (1.846095559s)
--- PASS: TestCertOptions (78.49s)

TestCertExpiration (289.5s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-699833 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-699833 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (57.044358806s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-699833 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-699833 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (51.449063645s)
helpers_test.go:175: Cleaning up "cert-expiration-699833" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-699833
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-699833: (1.003794289s)
--- PASS: TestCertExpiration (289.50s)

TestForceSystemdFlag (80.88s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-150140 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1209 03:14:39.455371  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-150140 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m19.748093804s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-150140 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-150140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-150140
--- PASS: TestForceSystemdFlag (80.88s)

TestForceSystemdEnv (42.86s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-239967 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-239967 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (41.825962735s)
helpers_test.go:175: Cleaning up "force-systemd-env-239967" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-239967
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-239967: (1.037611904s)
--- PASS: TestForceSystemdEnv (42.86s)

TestErrorSpam/setup (37.53s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-032482 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-032482 --driver=kvm2  --container-runtime=crio
E1209 02:03:08.554788  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:03:08.561321  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:03:08.572864  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:03:08.594499  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:03:08.636102  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:03:08.717645  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:03:08.879337  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:03:09.201209  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:03:09.843309  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:03:11.125432  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:03:13.687024  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:03:18.808443  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-032482 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-032482 --driver=kvm2  --container-runtime=crio: (37.531682373s)
--- PASS: TestErrorSpam/setup (37.53s)

TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-032482 --log_dir /tmp/nospam-032482 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-032482 --log_dir /tmp/nospam-032482 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-032482 --log_dir /tmp/nospam-032482 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.73s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-032482 --log_dir /tmp/nospam-032482 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-032482 --log_dir /tmp/nospam-032482 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-032482 --log_dir /tmp/nospam-032482 status
--- PASS: TestErrorSpam/status (0.73s)

TestErrorSpam/pause (1.55s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-032482 --log_dir /tmp/nospam-032482 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-032482 --log_dir /tmp/nospam-032482 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-032482 --log_dir /tmp/nospam-032482 pause
--- PASS: TestErrorSpam/pause (1.55s)

TestErrorSpam/unpause (1.92s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-032482 --log_dir /tmp/nospam-032482 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-032482 --log_dir /tmp/nospam-032482 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-032482 --log_dir /tmp/nospam-032482 unpause
E1209 02:03:29.050858  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestErrorSpam/unpause (1.92s)

TestErrorSpam/stop (5.17s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-032482 --log_dir /tmp/nospam-032482 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-032482 --log_dir /tmp/nospam-032482 stop: (2.098624182s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-032482 --log_dir /tmp/nospam-032482 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-032482 --log_dir /tmp/nospam-032482 stop: (1.40460406s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-032482 --log_dir /tmp/nospam-032482 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-032482 --log_dir /tmp/nospam-032482 stop: (1.664298528s)
--- PASS: TestErrorSpam/stop (5.17s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22081-254936/.minikube/files/etc/test/nested/copy/258854/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (81.28s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-545294 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1209 02:03:49.532754  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:04:30.495842  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-545294 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m21.276688183s)
--- PASS: TestFunctional/serial/StartWithProxy (81.28s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (44.36s)

=== RUN   TestFunctional/serial/SoftStart
I1209 02:04:56.253533  258854 config.go:182] Loaded profile config "functional-545294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-545294 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-545294 --alsologtostderr -v=8: (44.362637803s)
functional_test.go:678: soft start took 44.363531932s for "functional-545294" cluster.
I1209 02:05:40.616577  258854 config.go:182] Loaded profile config "functional-545294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (44.36s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-545294 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-545294 cache add registry.k8s.io/pause:3.1: (1.013024343s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-545294 cache add registry.k8s.io/pause:3.3: (1.071775683s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-545294 cache add registry.k8s.io/pause:latest: (1.083549593s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.17s)

TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-545294 /tmp/TestFunctionalserialCacheCmdcacheadd_local3269079053/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 cache add minikube-local-cache-test:functional-545294
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 cache delete minikube-local-cache-test:functional-545294
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-545294
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.20s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.6s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-545294 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (186.45262ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.60s)

TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 kubectl -- --context functional-545294 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-545294 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (34.22s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-545294 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1209 02:05:52.420789  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-545294 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.221924667s)
functional_test.go:776: restart took 34.222075412s for "functional-545294" cluster.
I1209 02:06:21.648045  258854 config.go:182] Loaded profile config "functional-545294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (34.22s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-545294 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-545294 logs: (1.514469735s)
--- PASS: TestFunctional/serial/LogsCmd (1.52s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.47s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 logs --file /tmp/TestFunctionalserialLogsFileCmd2143538455/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-545294 logs --file /tmp/TestFunctionalserialLogsFileCmd2143538455/001/logs.txt: (1.471508931s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.47s)

                                                
                                    
TestFunctional/serial/InvalidService (3.91s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-545294 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-545294
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-545294: exit status 115 (293.597996ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.184:30536 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-545294 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.91s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-545294 config get cpus: exit status 14 (68.980557ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-545294 config get cpus: exit status 14 (71.91378ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                    
TestFunctional/parallel/DryRun (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-545294 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-545294 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (126.188383ms)

                                                
                                                
-- stdout --
	* [functional-545294] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-254936/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-254936/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 02:07:41.274921  265344 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:07:41.275143  265344 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:07:41.275152  265344 out.go:374] Setting ErrFile to fd 2...
	I1209 02:07:41.275157  265344 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:07:41.275365  265344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
	I1209 02:07:41.275803  265344 out.go:368] Setting JSON to false
	I1209 02:07:41.276712  265344 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":28211,"bootTime":1765217850,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:07:41.276781  265344 start.go:143] virtualization: kvm guest
	I1209 02:07:41.279139  265344 out.go:179] * [functional-545294] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 02:07:41.281235  265344 notify.go:221] Checking for updates...
	I1209 02:07:41.281284  265344 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:07:41.282862  265344 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:07:41.284201  265344 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-254936/kubeconfig
	I1209 02:07:41.285746  265344 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-254936/.minikube
	I1209 02:07:41.287397  265344 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:07:41.288892  265344 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:07:41.290995  265344 config.go:182] Loaded profile config "functional-545294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:07:41.291533  265344 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:07:41.324784  265344 out.go:179] * Using the kvm2 driver based on existing profile
	I1209 02:07:41.326611  265344 start.go:309] selected driver: kvm2
	I1209 02:07:41.326637  265344 start.go:927] validating driver "kvm2" against &{Name:functional-545294 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-545294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:07:41.326762  265344 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:07:41.329132  265344 out.go:203] 
	W1209 02:07:41.330607  265344 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1209 02:07:41.332103  265344 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-545294 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.25s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-545294 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-545294 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (124.531164ms)

                                                
                                                
-- stdout --
	* [functional-545294] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-254936/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-254936/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 02:06:35.903862  264749 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:06:35.904001  264749 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:06:35.904007  264749 out.go:374] Setting ErrFile to fd 2...
	I1209 02:06:35.904011  264749 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:06:35.904320  264749 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
	I1209 02:06:35.904768  264749 out.go:368] Setting JSON to false
	I1209 02:06:35.905643  264749 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":28146,"bootTime":1765217850,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:06:35.905707  264749 start.go:143] virtualization: kvm guest
	I1209 02:06:35.908040  264749 out.go:179] * [functional-545294] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1209 02:06:35.909710  264749 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:06:35.909801  264749 notify.go:221] Checking for updates...
	I1209 02:06:35.912889  264749 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:06:35.914321  264749 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-254936/kubeconfig
	I1209 02:06:35.915651  264749 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-254936/.minikube
	I1209 02:06:35.916959  264749 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:06:35.918168  264749 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:06:35.920136  264749 config.go:182] Loaded profile config "functional-545294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:06:35.920795  264749 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:06:35.953903  264749 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1209 02:06:35.955336  264749 start.go:309] selected driver: kvm2
	I1209 02:06:35.955361  264749 start.go:927] validating driver "kvm2" against &{Name:functional-545294 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-545294 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:06:35.955486  264749 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:06:35.957796  264749 out.go:203] 
	W1209 02:06:35.959457  264749 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1209 02:06:35.960977  264749 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.73s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (80.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [7da85099-845e-43c0-abe3-694b2e59c644] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004218433s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-545294 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-545294 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-545294 get pvc myclaim -o=json
I1209 02:06:35.671070  258854 retry.go:31] will retry after 2.843829113s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:08fa0e37-0099-4c89-a816-69bf8e4f1504 ResourceVersion:687 Generation:0 CreationTimestamp:2025-12-09 02:06:35 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001dcae40 VolumeMode:0xc001dcae50 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-545294 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-545294 apply -f testdata/storage-provisioner/pod.yaml
I1209 02:06:38.705143  258854 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [849d48f1-dd55-4d4d-b3d4-0c9a679a3db5] Pending
helpers_test.go:352: "sp-pod" [849d48f1-dd55-4d4d-b3d4-0c9a679a3db5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [849d48f1-dd55-4d4d-b3d4-0c9a679a3db5] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 1m4.004355827s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-545294 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-545294 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-545294 delete -f testdata/storage-provisioner/pod.yaml: (1.194287995s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-545294 apply -f testdata/storage-provisioner/pod.yaml
I1209 02:07:44.182602  258854 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [c8feca44-44ed-4eb4-8817-2e317d08cf50] Pending
helpers_test.go:352: "sp-pod" [c8feca44-44ed-4eb4-8817-2e317d08cf50] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.006728796s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-545294 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (80.95s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.38s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh -n functional-545294 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 cp functional-545294:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1549653055/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh -n functional-545294 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh -n functional-545294 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.23s)

                                                
                                    
TestFunctional/parallel/MySQL (53.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-545294 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-6bcdcbc558-nbwpp" [0f362234-70c0-47ff-afab-c6cb6c695ef6] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-6bcdcbc558-nbwpp" [0f362234-70c0-47ff-afab-c6cb6c695ef6] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 45.007592165s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-545294 exec mysql-6bcdcbc558-nbwpp -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-545294 exec mysql-6bcdcbc558-nbwpp -- mysql -ppassword -e "show databases;": exit status 1 (160.705324ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1209 02:08:26.996012  258854 retry.go:31] will retry after 875.552252ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-545294 exec mysql-6bcdcbc558-nbwpp -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-545294 exec mysql-6bcdcbc558-nbwpp -- mysql -ppassword -e "show databases;": exit status 1 (190.615863ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1209 02:08:28.063436  258854 retry.go:31] will retry after 1.228431768s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-545294 exec mysql-6bcdcbc558-nbwpp -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-545294 exec mysql-6bcdcbc558-nbwpp -- mysql -ppassword -e "show databases;": exit status 1 (184.985988ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1209 02:08:29.478204  258854 retry.go:31] will retry after 1.678424425s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-545294 exec mysql-6bcdcbc558-nbwpp -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-545294 exec mysql-6bcdcbc558-nbwpp -- mysql -ppassword -e "show databases;": exit status 1 (167.58024ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1209 02:08:31.325468  258854 retry.go:31] will retry after 3.701415203s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-545294 exec mysql-6bcdcbc558-nbwpp -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (53.52s)

                                                
                                    
TestFunctional/parallel/FileSync (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/258854/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh "sudo cat /etc/test/nested/copy/258854/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.18s)

                                                
                                    
TestFunctional/parallel/CertSync (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/258854.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh "sudo cat /etc/ssl/certs/258854.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/258854.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh "sudo cat /usr/share/ca-certificates/258854.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2588542.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh "sudo cat /etc/ssl/certs/2588542.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2588542.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh "sudo cat /usr/share/ca-certificates/2588542.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.03s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-545294 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-545294 ssh "sudo systemctl is-active docker": exit status 1 (203.901935ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-545294 ssh "sudo systemctl is-active containerd": exit status 1 (217.476607ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

                                                
                                    
TestFunctional/parallel/License (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.25s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-545294 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-545294
localhost/kicbase/echo-server:functional-545294
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-545294 image ls --format short --alsologtostderr:
I1209 02:08:35.751126  265720 out.go:360] Setting OutFile to fd 1 ...
I1209 02:08:35.751401  265720 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:08:35.751412  265720 out.go:374] Setting ErrFile to fd 2...
I1209 02:08:35.751416  265720 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:08:35.751664  265720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
I1209 02:08:35.752414  265720 config.go:182] Loaded profile config "functional-545294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 02:08:35.752537  265720 config.go:182] Loaded profile config "functional-545294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 02:08:35.754911  265720 ssh_runner.go:195] Run: systemctl --version
I1209 02:08:35.757066  265720 main.go:143] libmachine: domain functional-545294 has defined MAC address 52:54:00:47:35:52 in network mk-functional-545294
I1209 02:08:35.757454  265720 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:47:35:52", ip: ""} in network mk-functional-545294: {Iface:virbr1 ExpiryTime:2025-12-09 03:03:50 +0000 UTC Type:0 Mac:52:54:00:47:35:52 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:functional-545294 Clientid:01:52:54:00:47:35:52}
I1209 02:08:35.757480  265720 main.go:143] libmachine: domain functional-545294 has defined IP address 192.168.39.184 and MAC address 52:54:00:47:35:52 in network mk-functional-545294
I1209 02:08:35.757602  265720 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/functional-545294/id_rsa Username:docker}
I1209 02:08:35.843709  265720 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-545294 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/kicbase/echo-server           │ functional-545294  │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/my-image                      │ functional-545294  │ e27f7afb76c75 │ 1.47MB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ d4918ca78576a │ 54.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ localhost/minikube-local-cache-test     │ functional-545294  │ a6e98af18c92e │ 3.33kB │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-545294 image ls --format table --alsologtostderr:
I1209 02:08:39.253597  265802 out.go:360] Setting OutFile to fd 1 ...
I1209 02:08:39.253870  265802 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:08:39.253881  265802 out.go:374] Setting ErrFile to fd 2...
I1209 02:08:39.253894  265802 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:08:39.254110  265802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
I1209 02:08:39.254716  265802 config.go:182] Loaded profile config "functional-545294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 02:08:39.254854  265802 config.go:182] Loaded profile config "functional-545294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 02:08:39.257018  265802 ssh_runner.go:195] Run: systemctl --version
I1209 02:08:39.259182  265802 main.go:143] libmachine: domain functional-545294 has defined MAC address 52:54:00:47:35:52 in network mk-functional-545294
I1209 02:08:39.259573  265802 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:47:35:52", ip: ""} in network mk-functional-545294: {Iface:virbr1 ExpiryTime:2025-12-09 03:03:50 +0000 UTC Type:0 Mac:52:54:00:47:35:52 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:functional-545294 Clientid:01:52:54:00:47:35:52}
I1209 02:08:39.259602  265802 main.go:143] libmachine: domain functional-545294 has defined IP address 192.168.39.184 and MAC address 52:54:00:47:35:52 in network mk-functional-545294
I1209 02:08:39.259751  265802 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/functional-545294/id_rsa Username:docker}
I1209 02:08:39.343138  265802 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-545294 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"e27f7afb76c753d9290061f0bf706cd31ae714450f7a7f093039ec419a3ad3c7","repoDigests":["localhost/my-image@sha256:d4f11dbbb5d4720952d5a49e527d06708df4a365586cf73d3ee1ef5e9c7aedd7"],"repoTags":["localhost/my-image:functional-545294"],"size":"1468599"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size
":"803724943"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9","public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"54242145"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c54
8e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"592300b3e38e1ebff7ed04e2f91671f699fcbfc0d3ea140ebb9eb808e18188d1","repoDigests":["docker.io/library/e606152ae5b304f7de6daabe0fb2a7fdb50efb0b72812d8d183cb843e0f6f616-tmp@sha256:97977b7ebb9bda2f80f28d858f94807e4ed986422cbf679b9a33df084a74b197"],"repoTags":[],"size":"1466018"},{"id":"a6e98af18c92e3851b06a71451fc80eb168e7fab3632f05c134401c4d5d901bc","repoDigests":["localhost/minikube-local-cache-test@sha256:81d86b9620da89aac12d1a930286eb610591027cd25a1789d8c820aa6b978d29"],"repoTags":["localhost/minikube-local-cache-test:functional-545294"],"size":"3330"},{"
id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io
/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb76
9377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b939
3d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-545294"],"size":"4943877"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-545294 image ls --format json --alsologtostderr:
I1209 02:08:39.058573  265791 out.go:360] Setting OutFile to fd 1 ...
I1209 02:08:39.058689  265791 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:08:39.058698  265791 out.go:374] Setting ErrFile to fd 2...
I1209 02:08:39.058703  265791 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:08:39.058931  265791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
I1209 02:08:39.059459  265791 config.go:182] Loaded profile config "functional-545294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 02:08:39.059557  265791 config.go:182] Loaded profile config "functional-545294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 02:08:39.061633  265791 ssh_runner.go:195] Run: systemctl --version
I1209 02:08:39.063888  265791 main.go:143] libmachine: domain functional-545294 has defined MAC address 52:54:00:47:35:52 in network mk-functional-545294
I1209 02:08:39.064286  265791 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:47:35:52", ip: ""} in network mk-functional-545294: {Iface:virbr1 ExpiryTime:2025-12-09 03:03:50 +0000 UTC Type:0 Mac:52:54:00:47:35:52 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:functional-545294 Clientid:01:52:54:00:47:35:52}
I1209 02:08:39.064307  265791 main.go:143] libmachine: domain functional-545294 has defined IP address 192.168.39.184 and MAC address 52:54:00:47:35:52 in network mk-functional-545294
I1209 02:08:39.064445  265791 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/functional-545294/id_rsa Username:docker}
I1209 02:08:39.150063  265791 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)
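As an aside for readers of this report, here is a minimal sketch of how the JSON emitted by `image ls --format json` above can be consumed. The field names (`id`, `repoDigests`, `repoTags`, `size`) are taken directly from the logged output; the file name and the small helper are illustrative assumptions, not part of the test suite.

```go
// Sketch: decode the image list JSON shown in the test output above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // reported as a string, e.g. "54242145"
}

func main() {
	// Assumed input file, e.g. `minikube image ls --format json > images.json`.
	data, err := os.ReadFile("images.json")
	if err != nil {
		panic(err)
	}
	var images []listedImage
	if err := json.Unmarshal(data, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%-60s %s bytes\n", firstOr(img.RepoTags, img.ID), img.Size)
	}
}

// firstOr returns the first tag if any, otherwise the fallback; untagged
// entries (empty repoTags, like the build intermediate above) fall back to the ID.
func firstOr(tags []string, fallback string) string {
	if len(tags) > 0 {
		return tags[0]
	}
	return fallback
}
```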

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-545294 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9
- public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "54242145"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-545294
size: "4943877"
- id: a6e98af18c92e3851b06a71451fc80eb168e7fab3632f05c134401c4d5d901bc
repoDigests:
- localhost/minikube-local-cache-test@sha256:81d86b9620da89aac12d1a930286eb610591027cd25a1789d8c820aa6b978d29
repoTags:
- localhost/minikube-local-cache-test:functional-545294
size: "3330"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-545294 image ls --format yaml --alsologtostderr:
I1209 02:08:35.946646  265731 out.go:360] Setting OutFile to fd 1 ...
I1209 02:08:35.946771  265731 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:08:35.946776  265731 out.go:374] Setting ErrFile to fd 2...
I1209 02:08:35.946780  265731 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:08:35.947008  265731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
I1209 02:08:35.947605  265731 config.go:182] Loaded profile config "functional-545294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 02:08:35.947718  265731 config.go:182] Loaded profile config "functional-545294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 02:08:35.949786  265731 ssh_runner.go:195] Run: systemctl --version
I1209 02:08:35.952016  265731 main.go:143] libmachine: domain functional-545294 has defined MAC address 52:54:00:47:35:52 in network mk-functional-545294
I1209 02:08:35.952427  265731 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:47:35:52", ip: ""} in network mk-functional-545294: {Iface:virbr1 ExpiryTime:2025-12-09 03:03:50 +0000 UTC Type:0 Mac:52:54:00:47:35:52 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:functional-545294 Clientid:01:52:54:00:47:35:52}
I1209 02:08:35.952456  265731 main.go:143] libmachine: domain functional-545294 has defined IP address 192.168.39.184 and MAC address 52:54:00:47:35:52 in network mk-functional-545294
I1209 02:08:35.952630  265731 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/functional-545294/id_rsa Username:docker}
I1209 02:08:36.038906  265731 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-545294 ssh pgrep buildkitd: exit status 1 (169.407971ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 image build -t localhost/my-image:functional-545294 testdata/build --alsologtostderr
E1209 02:08:36.262648  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-545294 image build -t localhost/my-image:functional-545294 testdata/build --alsologtostderr: (2.541293732s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-545294 image build -t localhost/my-image:functional-545294 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 592300b3e38
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-545294
--> e27f7afb76c
Successfully tagged localhost/my-image:functional-545294
e27f7afb76c753d9290061f0bf706cd31ae714450f7a7f093039ec419a3ad3c7
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-545294 image build -t localhost/my-image:functional-545294 testdata/build --alsologtostderr:
I1209 02:08:36.313247  265753 out.go:360] Setting OutFile to fd 1 ...
I1209 02:08:36.313530  265753 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:08:36.313540  265753 out.go:374] Setting ErrFile to fd 2...
I1209 02:08:36.313544  265753 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:08:36.313788  265753 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
I1209 02:08:36.314364  265753 config.go:182] Loaded profile config "functional-545294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 02:08:36.315251  265753 config.go:182] Loaded profile config "functional-545294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 02:08:36.317470  265753 ssh_runner.go:195] Run: systemctl --version
I1209 02:08:36.320115  265753 main.go:143] libmachine: domain functional-545294 has defined MAC address 52:54:00:47:35:52 in network mk-functional-545294
I1209 02:08:36.320712  265753 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:47:35:52", ip: ""} in network mk-functional-545294: {Iface:virbr1 ExpiryTime:2025-12-09 03:03:50 +0000 UTC Type:0 Mac:52:54:00:47:35:52 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:functional-545294 Clientid:01:52:54:00:47:35:52}
I1209 02:08:36.320748  265753 main.go:143] libmachine: domain functional-545294 has defined IP address 192.168.39.184 and MAC address 52:54:00:47:35:52 in network mk-functional-545294
I1209 02:08:36.320964  265753 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/functional-545294/id_rsa Username:docker}
I1209 02:08:36.405845  265753 build_images.go:162] Building image from path: /tmp/build.2346373272.tar
I1209 02:08:36.405935  265753 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1209 02:08:36.420631  265753 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2346373272.tar
I1209 02:08:36.426371  265753 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2346373272.tar: stat -c "%s %y" /var/lib/minikube/build/build.2346373272.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2346373272.tar': No such file or directory
I1209 02:08:36.426422  265753 ssh_runner.go:362] scp /tmp/build.2346373272.tar --> /var/lib/minikube/build/build.2346373272.tar (3072 bytes)
I1209 02:08:36.462665  265753 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2346373272
I1209 02:08:36.476725  265753 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2346373272 -xf /var/lib/minikube/build/build.2346373272.tar
I1209 02:08:36.489434  265753 crio.go:315] Building image: /var/lib/minikube/build/build.2346373272
I1209 02:08:36.489528  265753 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-545294 /var/lib/minikube/build/build.2346373272 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1209 02:08:38.756979  265753 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-545294 /var/lib/minikube/build/build.2346373272 --cgroup-manager=cgroupfs: (2.267407884s)
I1209 02:08:38.757070  265753 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2346373272
I1209 02:08:38.774077  265753 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2346373272.tar
I1209 02:08:38.787097  265753 build_images.go:218] Built localhost/my-image:functional-545294 from /tmp/build.2346373272.tar
I1209 02:08:38.787138  265753 build_images.go:134] succeeded building to: functional-545294
I1209 02:08:38.787143  265753 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.92s)
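The stderr above records the build path: the client tars the local build context, copies it into the guest under /var/lib/minikube/build, extracts it, and invokes podman there. The sketch below runs only the final podman step locally for illustration; the image tag is copied from the log, while the local `./build` context directory and the omission of the ssh hop are assumptions.

```go
// Sketch: the guest-side build command recorded in the log, run locally.
package main

import (
	"log"
	"os/exec"
)

func main() {
	// Mirrors: sudo podman build -t localhost/my-image:functional-545294 \
	//          /var/lib/minikube/build/build.2346373272 --cgroup-manager=cgroupfs
	// Assumes podman is installed and ./build holds an equivalent build context.
	cmd := exec.Command("podman", "build",
		"-t", "localhost/my-image:functional-545294",
		"./build")
	out, err := cmd.CombinedOutput()
	log.Printf("%s", out)
	if err != nil {
		log.Fatalf("podman build failed: %v", err)
	}
}
```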

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-545294
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.49s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 image load --daemon kicbase/echo-server:functional-545294 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-545294 image load --daemon kicbase/echo-server:functional-545294 --alsologtostderr: (1.735372602s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.94s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "296.529028ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "68.838188ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "362.12486ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "75.505741ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 image load --daemon kicbase/echo-server:functional-545294 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-545294
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 image load --daemon kicbase/echo-server:functional-545294 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.01s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 image save kicbase/echo-server:functional-545294 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 image rm kicbase/echo-server:functional-545294 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-545294
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 image save --daemon kicbase/echo-server:functional-545294 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-545294
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.67s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (60.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-545294 /tmp/TestFunctionalparallelMountCmdany-port4262596526/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765245995969241098" to /tmp/TestFunctionalparallelMountCmdany-port4262596526/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765245995969241098" to /tmp/TestFunctionalparallelMountCmdany-port4262596526/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765245995969241098" to /tmp/TestFunctionalparallelMountCmdany-port4262596526/001/test-1765245995969241098
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-545294 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (162.593024ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1209 02:06:36.132225  258854 retry.go:31] will retry after 437.05487ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  9 02:06 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  9 02:06 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  9 02:06 test-1765245995969241098
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh cat /mount-9p/test-1765245995969241098
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-545294 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [ebc811a7-8de7-4db7-ab8a-6466ab5be638] Pending
helpers_test.go:352: "busybox-mount" [ebc811a7-8de7-4db7-ab8a-6466ab5be638] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [ebc811a7-8de7-4db7-ab8a-6466ab5be638] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [ebc811a7-8de7-4db7-ab8a-6466ab5be638] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 59.005555393s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-545294 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-545294 /tmp/TestFunctionalparallelMountCmdany-port4262596526/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (60.96s)
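Note the retry pattern in the mount log above (`will retry after 437.05487ms`): the test polls `findmnt -T` on the mount point until a 9p filesystem appears. A small sketch of that readiness check follows; the mount point, timeout, and poll interval are assumptions for illustration, not values from the suite.

```go
// Sketch: poll findmnt until the 9p mount is visible or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForNinePMount(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("findmnt", "-T", path).CombinedOutput()
		if err == nil && strings.Contains(string(out), "9p") {
			return nil // mount is visible
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("%s not mounted as 9p within %s", path, timeout)
		}
		time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen in the log
	}
}

func main() {
	if err := waitForNinePMount("/mount-9p", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("/mount-9p is mounted")
}
```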

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-545294 /tmp/TestFunctionalparallelMountCmdspecific-port8324973/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-545294 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (178.705293ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1209 02:07:37.110604  258854 retry.go:31] will retry after 483.40439ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-545294 /tmp/TestFunctionalparallelMountCmdspecific-port8324973/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-545294 ssh "sudo umount -f /mount-9p": exit status 1 (168.633121ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-545294 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-545294 /tmp/TestFunctionalparallelMountCmdspecific-port8324973/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.37s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-545294 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2567937655/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-545294 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2567937655/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-545294 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2567937655/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-545294 ssh "findmnt -T" /mount1: exit status 1 (184.885462ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1209 02:07:38.485621  258854 retry.go:31] will retry after 399.77024ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-545294 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-545294 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2567937655/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-545294 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2567937655/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-545294 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2567937655/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-545294 service list: (1.231536989s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.23s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-545294 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-545294 service list -o json: (1.245921621s)
functional_test.go:1504: Took "1.246050611s" to run "out/minikube-linux-amd64 -p functional-545294 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.25s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-545294
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-545294
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-545294
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22081-254936/.minikube/files/etc/test/nested/copy/258854/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (76.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-074400 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-074400 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m16.181759698s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (76.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (30.58s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1209 02:17:54.069840  258854 config.go:182] Loaded profile config "functional-074400": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-074400 --alsologtostderr -v=8
E1209 02:18:08.554050  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-074400 --alsologtostderr -v=8: (30.579692769s)
functional_test.go:678: soft start took 30.58013836s for "functional-074400" cluster.
I1209 02:18:24.649976  258854 config.go:182] Loaded profile config "functional-074400": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (30.58s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-074400 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-074400 cache add registry.k8s.io/pause:3.3: (1.043704619s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-074400 cache add registry.k8s.io/pause:latest: (1.089612056s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.1s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-074400 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach3061223979/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 cache add minikube-local-cache-test:functional-074400
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 cache delete minikube-local-cache-test:functional-074400
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-074400
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.10s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074400 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (179.719559ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-074400 cache reload: (1.009707137s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.62s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 kubectl -- --context functional-074400 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-074400 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (59.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-074400 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-074400 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (59.387125692s)
functional_test.go:776: restart took 59.387383105s for "functional-074400" cluster.
I1209 02:19:30.761890  258854 config.go:182] Loaded profile config "functional-074400": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (59.39s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-074400 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 logs
E1209 02:19:31.624292  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-074400 logs: (1.408447991s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.48s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs63604353/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-074400 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs63604353/001/logs.txt: (1.481663071s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.48s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.50s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-074400 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-074400
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-074400: exit status 115 (268.940831ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.13:32060 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-074400 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.50s)
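The subtest applies a Service whose selector matches no running pod and expects minikube service to bail out with SVC_UNREACHABLE. A minimal sketch of the round trip, assuming it is run from the test's working directory so testdata/invalidsvc.yaml resolves:

    kubectl --context functional-074400 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-074400
    echo $?    # expected: 115 (SVC_UNREACHABLE, no running pod backs the service)
    kubectl --context functional-074400 delete -f testdata/invalidsvc.yaml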

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074400 config get cpus: exit status 14 (71.730165ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074400 config get cpus: exit status 14 (78.727472ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.47s)
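The config subtest is a set/get/unset round trip; config get exits with status 14 whenever the key is not present, which is what the two non-zero exits above are asserting. A minimal sketch:

    out/minikube-linux-amd64 -p functional-074400 config unset cpus
    out/minikube-linux-amd64 -p functional-074400 config get cpus     # exit 14: key not set
    out/minikube-linux-amd64 -p functional-074400 config set cpus 2
    out/minikube-linux-amd64 -p functional-074400 config get cpus     # prints 2
    out/minikube-linux-amd64 -p functional-074400 config unset cpus   # restore the clean state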

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-074400 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-074400 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (127.569933ms)

                                                
                                                
-- stdout --
	* [functional-074400] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-254936/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-254936/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 02:21:11.529872  270829 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:21:11.530143  270829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:21:11.530152  270829 out.go:374] Setting ErrFile to fd 2...
	I1209 02:21:11.530156  270829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:21:11.530353  270829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
	I1209 02:21:11.530801  270829 out.go:368] Setting JSON to false
	I1209 02:21:11.531724  270829 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":29022,"bootTime":1765217850,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:21:11.531797  270829 start.go:143] virtualization: kvm guest
	I1209 02:21:11.534094  270829 out.go:179] * [functional-074400] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 02:21:11.535533  270829 notify.go:221] Checking for updates...
	I1209 02:21:11.535546  270829 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:21:11.537359  270829 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:21:11.538877  270829 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-254936/kubeconfig
	I1209 02:21:11.540527  270829 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-254936/.minikube
	I1209 02:21:11.542407  270829 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:21:11.543979  270829 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:21:11.546189  270829 config.go:182] Loaded profile config "functional-074400": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 02:21:11.547005  270829 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:21:11.582847  270829 out.go:179] * Using the kvm2 driver based on existing profile
	I1209 02:21:11.584221  270829 start.go:309] selected driver: kvm2
	I1209 02:21:11.584244  270829 start.go:927] validating driver "kvm2" against &{Name:functional-074400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-074400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:21:11.584397  270829 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:21:11.586639  270829 out.go:203] 
	W1209 02:21:11.588219  270829 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1209 02:21:11.589712  270829 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-074400 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.42s)
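The two dry runs show that resource validation happens before anything is created: an undersized --memory request is rejected with exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the same dry run without the override validates cleanly against the existing profile. A minimal sketch of the pair:

    # rejected during validation: 250MB is below the 1800MB usable minimum (exit 23)
    out/minikube-linux-amd64 start -p functional-074400 --dry-run --memory 250MB --alsologtostderr \
      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.35.0-beta.0

    # same dry run without the memory override succeeds
    out/minikube-linux-amd64 start -p functional-074400 --dry-run --alsologtostderr -v=1 \
      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.35.0-beta.0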

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-074400 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-074400 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (150.664166ms)

                                                
                                                
-- stdout --
	* [functional-074400] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-254936/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-254936/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 02:19:40.053765  269824 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:19:40.053999  269824 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:19:40.054023  269824 out.go:374] Setting ErrFile to fd 2...
	I1209 02:19:40.054030  269824 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:19:40.054381  269824 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
	I1209 02:19:40.054960  269824 out.go:368] Setting JSON to false
	I1209 02:19:40.056061  269824 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":28930,"bootTime":1765217850,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 02:19:40.056145  269824 start.go:143] virtualization: kvm guest
	I1209 02:19:40.058382  269824 out.go:179] * [functional-074400] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1209 02:19:40.059908  269824 notify.go:221] Checking for updates...
	I1209 02:19:40.059978  269824 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 02:19:40.061536  269824 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 02:19:40.063747  269824 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-254936/kubeconfig
	I1209 02:19:40.065663  269824 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-254936/.minikube
	I1209 02:19:40.067169  269824 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 02:19:40.068906  269824 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 02:19:40.073959  269824 config.go:182] Loaded profile config "functional-074400": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 02:19:40.074558  269824 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 02:19:40.118281  269824 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1209 02:19:40.119782  269824 start.go:309] selected driver: kvm2
	I1209 02:19:40.119811  269824 start.go:927] validating driver "kvm2" against &{Name:functional-074400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-074400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 02:19:40.120023  269824 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 02:19:40.122647  269824 out.go:203] 
	W1209 02:19:40.126031  269824 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1209 02:19:40.128285  269824 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.74s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.74s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (92.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [a490d4a5-6b55-4d29-b267-b700cba89a87] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005564442s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-074400 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-074400 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-074400 get pvc myclaim -o=json
I1209 02:19:45.743114  258854 retry.go:31] will retry after 2.461509375s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:9ec28d63-d986-4452-b4e2-8a019be6ee62 ResourceVersion:710 Generation:0 CreationTimestamp:2025-12-09 02:19:45 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-9ec28d63-d986-4452-b4e2-8a019be6ee62 StorageClassName:0xc001aa8500 VolumeMode:0xc001aa8510 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-074400 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-074400 apply -f testdata/storage-provisioner/pod.yaml
I1209 02:19:48.399019  258854 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [0a7860b8-9827-416f-b125-83a9a78d15d3] Pending
helpers_test.go:352: "sp-pod" [0a7860b8-9827-416f-b125-83a9a78d15d3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [0a7860b8-9827-416f-b125-83a9a78d15d3] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 1m6.231859109s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-074400 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-074400 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-074400 delete -f testdata/storage-provisioner/pod.yaml: (9.284545509s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-074400 apply -f testdata/storage-provisioner/pod.yaml
I1209 02:21:04.319685  258854 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [431c06e4-5599-47d4-8f8e-fe047be3b9b9] Pending
helpers_test.go:352: "sp-pod" [431c06e4-5599-47d4-8f8e-fe047be3b9b9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [431c06e4-5599-47d4-8f8e-fe047be3b9b9] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.006496648s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-074400 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (92.02s)
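The PVC subtest is a write-then-reschedule check: bind a claim, write a file through one pod, delete and recreate the pod, and confirm the file is still on the volume. A minimal sketch of the same sequence with the test's manifests; the kubectl wait lines are only a shorthand for the polling the harness does:

    kubectl --context functional-074400 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-074400 wait --for=jsonpath='{.status.phase}'=Bound pvc/myclaim --timeout=2m
    kubectl --context functional-074400 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-074400 wait --for=condition=Ready pod/sp-pod --timeout=6m
    kubectl --context functional-074400 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-074400 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-074400 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-074400 wait --for=condition=Ready pod/sp-pod --timeout=6m
    kubectl --context functional-074400 exec sp-pod -- ls /tmp/mount    # foo should still be listed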

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.36s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh -n functional-074400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 cp functional-074400:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp3247977113/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh -n functional-074400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh -n functional-074400 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.23s)
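The cp subtest covers host-to-node, node-to-host, and a destination directory that does not exist yet; a minimal sketch, with the local target path chosen for illustration:

    out/minikube-linux-amd64 -p functional-074400 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-074400 ssh -n functional-074400 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p functional-074400 cp functional-074400:/home/docker/cp-test.txt /tmp/cp-test.txt
    out/minikube-linux-amd64 -p functional-074400 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt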

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (58.20s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-074400 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-7d7b65bc95-r489h" [8a98ebf9-c223-495b-9d6b-890b748749e8] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-7d7b65bc95-r489h" [8a98ebf9-c223-495b-9d6b-890b748749e8] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 52.004909509s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-074400 exec mysql-7d7b65bc95-r489h -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-074400 exec mysql-7d7b65bc95-r489h -- mysql -ppassword -e "show databases;": exit status 1 (231.559746ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1209 02:21:11.320649  258854 retry.go:31] will retry after 744.747451ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-074400 exec mysql-7d7b65bc95-r489h -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-074400 exec mysql-7d7b65bc95-r489h -- mysql -ppassword -e "show databases;": exit status 1 (213.33593ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1209 02:21:12.279962  258854 retry.go:31] will retry after 1.882585763s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-074400 exec mysql-7d7b65bc95-r489h -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-074400 exec mysql-7d7b65bc95-r489h -- mysql -ppassword -e "show databases;": exit status 1 (164.72802ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1209 02:21:14.327595  258854 retry.go:31] will retry after 2.587786242s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-074400 exec mysql-7d7b65bc95-r489h -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (58.20s)
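The retries above are expected: a freshly started mysqld briefly refuses connections (ERROR 2002) and may not have applied the root password yet (ERROR 1045), so the probe is repeated until it succeeds. A minimal sketch of the same probe with a simple retry loop; deploy/mysql assumes the Deployment in testdata/mysql.yaml is named mysql, otherwise substitute the pod name from kubectl get po -l app=mysql:

    for i in 1 2 3 4 5; do
      kubectl --context functional-074400 exec deploy/mysql -- \
        mysql -ppassword -e "show databases;" && break
      sleep 5
    done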

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.20s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/258854/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh "sudo cat /etc/test/nested/copy/258854/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/258854.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh "sudo cat /etc/ssl/certs/258854.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/258854.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh "sudo cat /usr/share/ca-certificates/258854.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2588542.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh "sudo cat /etc/ssl/certs/2588542.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2588542.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh "sudo cat /usr/share/ca-certificates/2588542.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.22s)
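Cert sync expects a host-provided certificate to appear in three places inside the VM: under /etc/ssl/certs and /usr/share/ca-certificates by its original name (258854, matching the test runner's PID seen in the log prefixes), and once more under what appears to be its subject-hash name (51391683.0). A minimal sketch of the same checks, paths taken from the log:

    out/minikube-linux-amd64 -p functional-074400 ssh "sudo cat /etc/ssl/certs/258854.pem"
    out/minikube-linux-amd64 -p functional-074400 ssh "sudo cat /usr/share/ca-certificates/258854.pem"
    out/minikube-linux-amd64 -p functional-074400 ssh "sudo cat /etc/ssl/certs/51391683.0"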

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-074400 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)
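The node-label check flattens every label of the first node into a single line with a go-template. The same template, rewritten here with shell-friendly quoting so it can be pasted directly:

    kubectl --context functional-074400 get nodes -o go-template \
      --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'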

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074400 ssh "sudo systemctl is-active docker": exit status 1 (201.447247ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074400 ssh "sudo systemctl is-active containerd": exit status 1 (213.571487ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.42s)
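With cri-o as the active runtime, docker and containerd must both report inactive; systemctl is-active prints the state but exits non-zero (status 3 here) for an inactive unit, which is why the harness records a non-zero exit even though stdout looks fine. A minimal sketch; the crio line is an addition to show the contrast:

    out/minikube-linux-amd64 -p functional-074400 ssh "sudo systemctl is-active crio"         # active, exit 0
    out/minikube-linux-amd64 -p functional-074400 ssh "sudo systemctl is-active docker"       # inactive, exit 3
    out/minikube-linux-amd64 -p functional-074400 ssh "sudo systemctl is-active containerd"   # inactive, exit 3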

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.36s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.45s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-074400 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-074400
localhost/kicbase/echo-server:functional-074400
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-074400 image ls --format short --alsologtostderr:
I1209 02:21:16.908923  271029 out.go:360] Setting OutFile to fd 1 ...
I1209 02:21:16.909191  271029 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:21:16.909201  271029 out.go:374] Setting ErrFile to fd 2...
I1209 02:21:16.909205  271029 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:21:16.909393  271029 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
I1209 02:21:16.909979  271029 config.go:182] Loaded profile config "functional-074400": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 02:21:16.910082  271029 config.go:182] Loaded profile config "functional-074400": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 02:21:16.912620  271029 ssh_runner.go:195] Run: systemctl --version
I1209 02:21:16.915542  271029 main.go:143] libmachine: domain functional-074400 has defined MAC address 52:54:00:65:dd:79 in network mk-functional-074400
I1209 02:21:16.916158  271029 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:65:dd:79", ip: ""} in network mk-functional-074400: {Iface:virbr1 ExpiryTime:2025-12-09 03:16:54 +0000 UTC Type:0 Mac:52:54:00:65:dd:79 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:functional-074400 Clientid:01:52:54:00:65:dd:79}
I1209 02:21:16.916186  271029 main.go:143] libmachine: domain functional-074400 has defined IP address 192.168.39.13 and MAC address 52:54:00:65:dd:79 in network mk-functional-074400
I1209 02:21:16.916374  271029 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/functional-074400/id_rsa Username:docker}
I1209 02:21:17.004506  271029 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.24s)
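Each image ls format is a different rendering of the same data, which the stderr trace shows is gathered with sudo crictl images --output json inside the VM. A minimal sketch of the variants exercised here:

    out/minikube-linux-amd64 -p functional-074400 image ls --format short
    out/minikube-linux-amd64 -p functional-074400 image ls --format table
    out/minikube-linux-amd64 -p functional-074400 image ls --format json

    # the underlying query, run over ssh as in the trace above
    out/minikube-linux-amd64 -p functional-074400 ssh "sudo crictl images --output json"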

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-074400 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ localhost/kicbase/echo-server           │ functional-074400  │ 9056ab77afb8e │ 4.94MB │
│ localhost/minikube-local-cache-test     │ functional-074400  │ a6e98af18c92e │ 3.33kB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ d4918ca78576a │ 54.2MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-074400 image ls --format table --alsologtostderr:
I1209 02:21:17.562491  271092 out.go:360] Setting OutFile to fd 1 ...
I1209 02:21:17.562591  271092 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:21:17.562596  271092 out.go:374] Setting ErrFile to fd 2...
I1209 02:21:17.562600  271092 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:21:17.562791  271092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
I1209 02:21:17.563369  271092 config.go:182] Loaded profile config "functional-074400": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 02:21:17.563460  271092 config.go:182] Loaded profile config "functional-074400": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 02:21:17.565549  271092 ssh_runner.go:195] Run: systemctl --version
I1209 02:21:17.567780  271092 main.go:143] libmachine: domain functional-074400 has defined MAC address 52:54:00:65:dd:79 in network mk-functional-074400
I1209 02:21:17.568310  271092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:65:dd:79", ip: ""} in network mk-functional-074400: {Iface:virbr1 ExpiryTime:2025-12-09 03:16:54 +0000 UTC Type:0 Mac:52:54:00:65:dd:79 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:functional-074400 Clientid:01:52:54:00:65:dd:79}
I1209 02:21:17.568337  271092 main.go:143] libmachine: domain functional-074400 has defined IP address 192.168.39.13 and MAC address 52:54:00:65:dd:79 in network mk-functional-074400
I1209 02:21:17.568532  271092 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/functional-074400/id_rsa Username:docker}
I1209 02:21:17.660162  271092 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-074400 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"a6e98af18c92e3851b06a71451fc80eb168e7fab3632f05c134401c4d5d901bc","repoDigests":["localhost/minikube-local-cache-test@sha256:81d86b9620da89aac12d1a930286eb610591027cd25a1789d8c820aa6b978d29"],"repoTags":["localhost/minikube-local-cache-test:functional-074400"],"size":"
3330"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"76872535"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:bb3d
10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52747095"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90819569"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9","public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb
4367f80a2d63c3bb5b172e46fc3b5"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"54242145"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a","registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71977881"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/p
ause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab7
7afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-074400"],"size":"4943877"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-074400 image ls --format json --alsologtostderr:
I1209 02:21:17.347590  271081 out.go:360] Setting OutFile to fd 1 ...
I1209 02:21:17.347697  271081 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:21:17.347703  271081 out.go:374] Setting ErrFile to fd 2...
I1209 02:21:17.347707  271081 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:21:17.347975  271081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
I1209 02:21:17.348591  271081 config.go:182] Loaded profile config "functional-074400": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 02:21:17.348685  271081 config.go:182] Loaded profile config "functional-074400": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 02:21:17.350891  271081 ssh_runner.go:195] Run: systemctl --version
I1209 02:21:17.353219  271081 main.go:143] libmachine: domain functional-074400 has defined MAC address 52:54:00:65:dd:79 in network mk-functional-074400
I1209 02:21:17.353667  271081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:65:dd:79", ip: ""} in network mk-functional-074400: {Iface:virbr1 ExpiryTime:2025-12-09 03:16:54 +0000 UTC Type:0 Mac:52:54:00:65:dd:79 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:functional-074400 Clientid:01:52:54:00:65:dd:79}
I1209 02:21:17.353698  271081 main.go:143] libmachine: domain functional-074400 has defined IP address 192.168.39.13 and MAC address 52:54:00:65:dd:79 in network mk-functional-074400
I1209 02:21:17.353841  271081 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/functional-074400/id_rsa Username:docker}
I1209 02:21:17.434782  271081 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.20s)
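
The JSON listing above is produced by opening an SSH session to the node and running crictl, as the stderr trace shows. A minimal sketch of the equivalent manual check, assuming the functional-074400 profile is still running:

# Minimal sketch, assuming the functional-074400 profile is up and uses the crio runtime.
# `minikube image ls --format json` wraps roughly this: SSH into the node and ask crictl
# for the image list as JSON.
out/minikube-linux-amd64 -p functional-074400 ssh -- sudo crictl images --output json
# The same data rendered by minikube itself:
out/minikube-linux-amd64 -p functional-074400 image ls --format json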

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-074400 image ls --format yaml --alsologtostderr:
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90819569"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76872535"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
- registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71977881"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52747095"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-074400
size: "4943877"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9
- public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "54242145"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: a6e98af18c92e3851b06a71451fc80eb168e7fab3632f05c134401c4d5d901bc
repoDigests:
- localhost/minikube-local-cache-test@sha256:81d86b9620da89aac12d1a930286eb610591027cd25a1789d8c820aa6b978d29
repoTags:
- localhost/minikube-local-cache-test:functional-074400
size: "3330"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-074400 image ls --format yaml --alsologtostderr:
I1209 02:21:17.145172  271050 out.go:360] Setting OutFile to fd 1 ...
I1209 02:21:17.145452  271050 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:21:17.145470  271050 out.go:374] Setting ErrFile to fd 2...
I1209 02:21:17.145473  271050 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:21:17.145660  271050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
I1209 02:21:17.146298  271050 config.go:182] Loaded profile config "functional-074400": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 02:21:17.146400  271050 config.go:182] Loaded profile config "functional-074400": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 02:21:17.148995  271050 ssh_runner.go:195] Run: systemctl --version
I1209 02:21:17.152255  271050 main.go:143] libmachine: domain functional-074400 has defined MAC address 52:54:00:65:dd:79 in network mk-functional-074400
I1209 02:21:17.152922  271050 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:65:dd:79", ip: ""} in network mk-functional-074400: {Iface:virbr1 ExpiryTime:2025-12-09 03:16:54 +0000 UTC Type:0 Mac:52:54:00:65:dd:79 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:functional-074400 Clientid:01:52:54:00:65:dd:79}
I1209 02:21:17.152971  271050 main.go:143] libmachine: domain functional-074400 has defined IP address 192.168.39.13 and MAC address 52:54:00:65:dd:79 in network mk-functional-074400
I1209 02:21:17.153190  271050 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/functional-074400/id_rsa Username:docker}
I1209 02:21:17.234680  271050 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074400 ssh pgrep buildkitd: exit status 1 (175.662964ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 image build -t localhost/my-image:functional-074400 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-074400 image build -t localhost/my-image:functional-074400 testdata/build --alsologtostderr: (2.774068854s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-074400 image build -t localhost/my-image:functional-074400 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 38547b602f9
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-074400
--> d33aa61b8f6
Successfully tagged localhost/my-image:functional-074400
d33aa61b8f6ea1d48bb0a5a5b028ac49e5c0eefd6ec91389ad4698964115270f
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-074400 image build -t localhost/my-image:functional-074400 testdata/build --alsologtostderr:
I1209 02:21:17.315458  271071 out.go:360] Setting OutFile to fd 1 ...
I1209 02:21:17.315752  271071 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:21:17.315763  271071 out.go:374] Setting ErrFile to fd 2...
I1209 02:21:17.315767  271071 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 02:21:17.316031  271071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
I1209 02:21:17.316632  271071 config.go:182] Loaded profile config "functional-074400": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 02:21:17.317432  271071 config.go:182] Loaded profile config "functional-074400": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 02:21:17.319610  271071 ssh_runner.go:195] Run: systemctl --version
I1209 02:21:17.322018  271071 main.go:143] libmachine: domain functional-074400 has defined MAC address 52:54:00:65:dd:79 in network mk-functional-074400
I1209 02:21:17.322433  271071 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:65:dd:79", ip: ""} in network mk-functional-074400: {Iface:virbr1 ExpiryTime:2025-12-09 03:16:54 +0000 UTC Type:0 Mac:52:54:00:65:dd:79 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:functional-074400 Clientid:01:52:54:00:65:dd:79}
I1209 02:21:17.322478  271071 main.go:143] libmachine: domain functional-074400 has defined IP address 192.168.39.13 and MAC address 52:54:00:65:dd:79 in network mk-functional-074400
I1209 02:21:17.322882  271071 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/functional-074400/id_rsa Username:docker}
I1209 02:21:17.408754  271071 build_images.go:162] Building image from path: /tmp/build.4079070108.tar
I1209 02:21:17.408852  271071 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1209 02:21:17.425554  271071 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4079070108.tar
I1209 02:21:17.431680  271071 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4079070108.tar: stat -c "%s %y" /var/lib/minikube/build/build.4079070108.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4079070108.tar': No such file or directory
I1209 02:21:17.431732  271071 ssh_runner.go:362] scp /tmp/build.4079070108.tar --> /var/lib/minikube/build/build.4079070108.tar (3072 bytes)
I1209 02:21:17.477493  271071 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4079070108
I1209 02:21:17.491742  271071 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4079070108 -xf /var/lib/minikube/build/build.4079070108.tar
I1209 02:21:17.511030  271071 crio.go:315] Building image: /var/lib/minikube/build/build.4079070108
I1209 02:21:17.511114  271071 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-074400 /var/lib/minikube/build/build.4079070108 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1209 02:21:19.987184  271071 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-074400 /var/lib/minikube/build/build.4079070108 --cgroup-manager=cgroupfs: (2.476042478s)
I1209 02:21:19.987310  271071 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4079070108
I1209 02:21:20.004779  271071 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4079070108.tar
I1209 02:21:20.018773  271071 build_images.go:218] Built localhost/my-image:functional-074400 from /tmp/build.4079070108.tar
I1209 02:21:20.018842  271071 build_images.go:134] succeeded building to: functional-074400
I1209 02:21:20.018850  271071 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 image ls
E1209 02:21:29.394466  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:21:29.401015  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:21:29.412618  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:21:29.434210  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:21:29.476016  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:21:29.557645  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:21:29.719556  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:21:30.042056  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:21:30.683455  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:21:31.964983  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:21:34.526365  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:21:39.648640  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:21:49.890708  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:22:10.373073  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:22:51.334896  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:23:08.553114  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:24:13.256520  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:26:29.394621  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:26:57.098294  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:28:08.553754  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.17s)
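
The build above copies a tarball of testdata/build into the node, unpacks it under /var/lib/minikube/build, and runs podman build against it. The STEP lines imply a three-instruction Dockerfile; the sketch below reconstructs a build context under that assumption (the real contents of testdata/build may differ):

# Hedged reconstruction of the build context implied by the STEP lines above;
# the actual testdata/build directory is not reproduced in this log.
mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
echo "some content" > content.txt    # placeholder file; the ADD step expects a content.txt
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
# Build inside the crio node the same way the test does:
out/minikube-linux-amd64 -p functional-074400 image build -t localhost/my-image:functional-074400 .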

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-074400
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.23s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.78s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 image load --daemon kicbase/echo-server:functional-074400 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-074400 image load --daemon kicbase/echo-server:functional-074400 --alsologtostderr: (1.449446698s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.78s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "322.878106ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "79.605581ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "345.919198ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "69.246367ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (35.99s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-074400 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3729681718/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765246780307018639" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3729681718/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765246780307018639" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3729681718/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765246780307018639" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3729681718/001/test-1765246780307018639
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074400 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (181.808227ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1209 02:19:40.489318  258854 retry.go:31] will retry after 380.987434ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  9 02:19 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  9 02:19 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  9 02:19 test-1765246780307018639
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh cat /mount-9p/test-1765246780307018639
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-074400 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [315ec66e-345d-4c53-a2a6-50f943add31b] Pending
helpers_test.go:352: "busybox-mount" [315ec66e-345d-4c53-a2a6-50f943add31b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [315ec66e-345d-4c53-a2a6-50f943add31b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [315ec66e-345d-4c53-a2a6-50f943add31b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 34.004276131s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-074400 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-074400 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3729681718/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (35.99s)
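
The any-port test runs `minikube mount` as a background daemon, writes marker files on the host, and then checks the 9p mount from inside the guest. A minimal sketch of the same round trip, assuming the functional-074400 profile; the host directory here is an arbitrary example, not the test's generated temp path:

# Sketch only; /tmp/mount-demo is an arbitrary example directory.
mkdir -p /tmp/mount-demo && echo "hello from host" > /tmp/mount-demo/created-by-test
out/minikube-linux-amd64 mount -p functional-074400 /tmp/mount-demo:/mount-9p &
MOUNT_PID=$!
# Verify the 9p mount is visible from the guest, then read the file back:
out/minikube-linux-amd64 -p functional-074400 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-074400 ssh "cat /mount-9p/created-by-test"
# Tear down the mount the way the test does on cleanup:
out/minikube-linux-amd64 -p functional-074400 ssh "sudo umount -f /mount-9p"
kill "$MOUNT_PID"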

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.95s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 image load --daemon kicbase/echo-server:functional-074400 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.95s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-074400
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 image load --daemon kicbase/echo-server:functional-074400 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 image save kicbase/echo-server:functional-074400 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 image rm kicbase/echo-server:functional-074400 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.89s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.89s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-074400
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 image save --daemon kicbase/echo-server:functional-074400 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-074400
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.57s)
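
Taken together, the last four image tests (save-to-file, remove, load-from-file, save-to-daemon) exercise a full round trip of kicbase/echo-server:functional-074400 between the host docker daemon and the crio node. A condensed sketch of that cycle; the tar path is an arbitrary example rather than the workspace path used above:

# Condensed round trip; /tmp/echo-server-save.tar is an arbitrary example location.
out/minikube-linux-amd64 -p functional-074400 image save kicbase/echo-server:functional-074400 /tmp/echo-server-save.tar
out/minikube-linux-amd64 -p functional-074400 image rm kicbase/echo-server:functional-074400
out/minikube-linux-amd64 -p functional-074400 image load /tmp/echo-server-save.tar
# Push the image from the node back into the host docker daemon and confirm it arrived:
out/minikube-linux-amd64 -p functional-074400 image save --daemon kicbase/echo-server:functional-074400
docker image inspect localhost/kicbase/echo-server:functional-074400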

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-074400 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo273279271/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074400 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (181.880212ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1209 02:20:16.478593  258854 retry.go:31] will retry after 580.078999ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-074400 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo273279271/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074400 ssh "sudo umount -f /mount-9p": exit status 1 (163.267313ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-074400 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-074400 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo273279271/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.47s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-074400 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2669497316/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-074400 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2669497316/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-074400 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2669497316/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074400 ssh "findmnt -T" /mount1: exit status 1 (195.899363ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1209 02:20:17.967878  258854 retry.go:31] will retry after 368.54974ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-074400 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-074400 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2669497316/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-074400 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2669497316/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-074400 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2669497316/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.11s)
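
VerifyCleanup starts three concurrent mount daemons against the same host directory and then relies on `mount --kill=true` to terminate them all at once. A short sketch of that pattern, with arbitrary example paths:

# Three concurrent mounts of one host directory; /tmp/mount-demo is an arbitrary example.
for target in /mount1 /mount2 /mount3; do
  out/minikube-linux-amd64 mount -p functional-074400 /tmp/mount-demo:"$target" &
done
out/minikube-linux-amd64 -p functional-074400 ssh "findmnt -T /mount1"
# Kill every mount process for this profile in one shot, as the test does:
out/minikube-linux-amd64 mount -p functional-074400 --kill=true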

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-074400 service list: (1.211197781s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.21s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-074400 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-074400 service list -o json: (1.218016156s)
functional_test.go:1504: Took "1.218119066s" to run "out/minikube-linux-amd64 -p functional-074400 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.22s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-074400
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-074400
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-074400
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (203.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1209 02:31:29.394757  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:33:08.554052  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-134907 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m22.668921746s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (203.28s)
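
StartCluster brings up a multi-control-plane (HA) cluster on the kvm2 driver with the crio runtime and then verifies node status. For reference, the same sequence as a standalone sketch (flags exactly as run above; the follow-up `get nodes` check is illustrative, not part of the test):

# HA start + status check as exercised by this test block; ha-134907 is the test's profile name.
out/minikube-linux-amd64 -p ha-134907 start --ha --memory 3072 --wait true --alsologtostderr -v 5 \
  --driver=kvm2 --container-runtime=crio
out/minikube-linux-amd64 -p ha-134907 status --alsologtostderr -v 5
# Confirm the control-plane and worker nodes registered:
out/minikube-linux-amd64 -p ha-134907 kubectl -- get nodes -o wide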

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (7.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-134907 kubectl -- rollout status deployment/busybox: (4.724540592s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 kubectl -- exec busybox-7b57f96db7-7dxkk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 kubectl -- exec busybox-7b57f96db7-dcdzd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 kubectl -- exec busybox-7b57f96db7-k8vzd -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 kubectl -- exec busybox-7b57f96db7-7dxkk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 kubectl -- exec busybox-7b57f96db7-dcdzd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 kubectl -- exec busybox-7b57f96db7-k8vzd -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 kubectl -- exec busybox-7b57f96db7-7dxkk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 kubectl -- exec busybox-7b57f96db7-dcdzd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 kubectl -- exec busybox-7b57f96db7-k8vzd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.25s)
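
DeployApp applies testdata/ha/ha-pod-dns-test.yaml, waits for the busybox rollout, and then runs DNS lookups from each replica. The manifest itself is not reproduced in this log; the sketch below is a hypothetical stand-in consistent with the three busybox-7b57f96db7-* pods above, not the test's actual file:

# Hypothetical manifest consistent with the busybox pods above; not the real testdata file.
cat > /tmp/busybox-dns-test.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
spec:
  replicas: 3
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
        command: ["sleep", "3600"]
EOF
out/minikube-linux-amd64 -p ha-134907 kubectl -- apply -f /tmp/busybox-dns-test.yaml
out/minikube-linux-amd64 -p ha-134907 kubectl -- rollout status deployment/busybox
# DNS check against one replica (the test loops over every pod by name):
out/minikube-linux-amd64 -p ha-134907 kubectl -- exec deploy/busybox -- nslookup kubernetes.default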

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 kubectl -- exec busybox-7b57f96db7-7dxkk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 kubectl -- exec busybox-7b57f96db7-7dxkk -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 kubectl -- exec busybox-7b57f96db7-dcdzd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 kubectl -- exec busybox-7b57f96db7-dcdzd -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 kubectl -- exec busybox-7b57f96db7-k8vzd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 kubectl -- exec busybox-7b57f96db7-k8vzd -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.45s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (44.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-134907 node add --alsologtostderr -v 5: (44.21005749s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (44.96s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-134907 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.73s)

TestMultiControlPlane/serial/CopyFile (11.67s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 cp testdata/cp-test.txt ha-134907:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 cp ha-134907:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4289229874/001/cp-test_ha-134907.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 cp ha-134907:/home/docker/cp-test.txt ha-134907-m02:/home/docker/cp-test_ha-134907_ha-134907-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907-m02 "sudo cat /home/docker/cp-test_ha-134907_ha-134907-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 cp ha-134907:/home/docker/cp-test.txt ha-134907-m03:/home/docker/cp-test_ha-134907_ha-134907-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907-m03 "sudo cat /home/docker/cp-test_ha-134907_ha-134907-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 cp ha-134907:/home/docker/cp-test.txt ha-134907-m04:/home/docker/cp-test_ha-134907_ha-134907-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907-m04 "sudo cat /home/docker/cp-test_ha-134907_ha-134907-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 cp testdata/cp-test.txt ha-134907-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 cp ha-134907-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4289229874/001/cp-test_ha-134907-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 cp ha-134907-m02:/home/docker/cp-test.txt ha-134907:/home/docker/cp-test_ha-134907-m02_ha-134907.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907 "sudo cat /home/docker/cp-test_ha-134907-m02_ha-134907.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 cp ha-134907-m02:/home/docker/cp-test.txt ha-134907-m03:/home/docker/cp-test_ha-134907-m02_ha-134907-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907-m03 "sudo cat /home/docker/cp-test_ha-134907-m02_ha-134907-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 cp ha-134907-m02:/home/docker/cp-test.txt ha-134907-m04:/home/docker/cp-test_ha-134907-m02_ha-134907-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907-m04 "sudo cat /home/docker/cp-test_ha-134907-m02_ha-134907-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 cp testdata/cp-test.txt ha-134907-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 cp ha-134907-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4289229874/001/cp-test_ha-134907-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 cp ha-134907-m03:/home/docker/cp-test.txt ha-134907:/home/docker/cp-test_ha-134907-m03_ha-134907.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907 "sudo cat /home/docker/cp-test_ha-134907-m03_ha-134907.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 cp ha-134907-m03:/home/docker/cp-test.txt ha-134907-m02:/home/docker/cp-test_ha-134907-m03_ha-134907-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907-m02 "sudo cat /home/docker/cp-test_ha-134907-m03_ha-134907-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 cp ha-134907-m03:/home/docker/cp-test.txt ha-134907-m04:/home/docker/cp-test_ha-134907-m03_ha-134907-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907-m04 "sudo cat /home/docker/cp-test_ha-134907-m03_ha-134907-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 cp testdata/cp-test.txt ha-134907-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 cp ha-134907-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4289229874/001/cp-test_ha-134907-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 cp ha-134907-m04:/home/docker/cp-test.txt ha-134907:/home/docker/cp-test_ha-134907-m04_ha-134907.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907 "sudo cat /home/docker/cp-test_ha-134907-m04_ha-134907.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 cp ha-134907-m04:/home/docker/cp-test.txt ha-134907-m02:/home/docker/cp-test_ha-134907-m04_ha-134907-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907-m02 "sudo cat /home/docker/cp-test_ha-134907-m04_ha-134907-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 cp ha-134907-m04:/home/docker/cp-test.txt ha-134907-m03:/home/docker/cp-test_ha-134907-m04_ha-134907-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 ssh -n ha-134907-m03 "sudo cat /home/docker/cp-test_ha-134907-m04_ha-134907-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (11.67s)
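
The CopyFile sequence above pushes one payload through every host/node combination. A condensed sketch of the same round trip for a single node pair, assuming the ha-134907 profile is still running; the paths mirror the ones used by the test:

# host -> node, node -> host, node -> node, then verify on the target node
minikube -p ha-134907 cp testdata/cp-test.txt ha-134907:/home/docker/cp-test.txt
minikube -p ha-134907 cp ha-134907:/home/docker/cp-test.txt /tmp/cp-test_ha-134907.txt
minikube -p ha-134907 cp ha-134907:/home/docker/cp-test.txt ha-134907-m02:/home/docker/cp-test_ha-134907_ha-134907-m02.txt
minikube -p ha-134907 ssh -n ha-134907-m02 "sudo cat /home/docker/cp-test_ha-134907_ha-134907-m02.txt"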

TestMultiControlPlane/serial/StopSecondaryNode (88.01s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 node stop m02 --alsologtostderr -v 5
E1209 02:34:39.454979  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:34:39.461647  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:34:39.473167  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:34:39.494741  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:34:39.536241  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:34:39.617798  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:34:39.779411  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:34:40.101167  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:34:40.743544  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:34:42.024965  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:34:44.587361  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:34:49.709458  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:34:59.950901  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:35:20.433347  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-134907 node stop m02 --alsologtostderr -v 5: (1m27.463219918s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-134907 status --alsologtostderr -v 5: exit status 7 (550.106466ms)

-- stdout --
	ha-134907
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-134907-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-134907-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-134907-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1209 02:35:50.129460  276075 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:35:50.129776  276075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:35:50.129789  276075 out.go:374] Setting ErrFile to fd 2...
	I1209 02:35:50.129794  276075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:35:50.130036  276075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
	I1209 02:35:50.130252  276075 out.go:368] Setting JSON to false
	I1209 02:35:50.130288  276075 mustload.go:66] Loading cluster: ha-134907
	I1209 02:35:50.130401  276075 notify.go:221] Checking for updates...
	I1209 02:35:50.130835  276075 config.go:182] Loaded profile config "ha-134907": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:35:50.130863  276075 status.go:174] checking status of ha-134907 ...
	I1209 02:35:50.133700  276075 status.go:371] ha-134907 host status = "Running" (err=<nil>)
	I1209 02:35:50.133730  276075 host.go:66] Checking if "ha-134907" exists ...
	I1209 02:35:50.137004  276075 main.go:143] libmachine: domain ha-134907 has defined MAC address 52:54:00:e9:c8:cf in network mk-ha-134907
	I1209 02:35:50.137946  276075 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e9:c8:cf", ip: ""} in network mk-ha-134907: {Iface:virbr1 ExpiryTime:2025-12-09 03:30:09 +0000 UTC Type:0 Mac:52:54:00:e9:c8:cf Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:ha-134907 Clientid:01:52:54:00:e9:c8:cf}
	I1209 02:35:50.137999  276075 main.go:143] libmachine: domain ha-134907 has defined IP address 192.168.39.204 and MAC address 52:54:00:e9:c8:cf in network mk-ha-134907
	I1209 02:35:50.138226  276075 host.go:66] Checking if "ha-134907" exists ...
	I1209 02:35:50.138489  276075 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 02:35:50.141340  276075 main.go:143] libmachine: domain ha-134907 has defined MAC address 52:54:00:e9:c8:cf in network mk-ha-134907
	I1209 02:35:50.141867  276075 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e9:c8:cf", ip: ""} in network mk-ha-134907: {Iface:virbr1 ExpiryTime:2025-12-09 03:30:09 +0000 UTC Type:0 Mac:52:54:00:e9:c8:cf Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:ha-134907 Clientid:01:52:54:00:e9:c8:cf}
	I1209 02:35:50.141905  276075 main.go:143] libmachine: domain ha-134907 has defined IP address 192.168.39.204 and MAC address 52:54:00:e9:c8:cf in network mk-ha-134907
	I1209 02:35:50.142108  276075 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/ha-134907/id_rsa Username:docker}
	I1209 02:35:50.235100  276075 ssh_runner.go:195] Run: systemctl --version
	I1209 02:35:50.243066  276075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:35:50.263642  276075 kubeconfig.go:125] found "ha-134907" server: "https://192.168.39.254:8443"
	I1209 02:35:50.263690  276075 api_server.go:166] Checking apiserver status ...
	I1209 02:35:50.263749  276075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 02:35:50.287968  276075 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1423/cgroup
	W1209 02:35:50.302479  276075 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1423/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1209 02:35:50.302548  276075 ssh_runner.go:195] Run: ls
	I1209 02:35:50.308645  276075 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1209 02:35:50.314069  276075 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1209 02:35:50.314096  276075 status.go:463] ha-134907 apiserver status = Running (err=<nil>)
	I1209 02:35:50.314107  276075 status.go:176] ha-134907 status: &{Name:ha-134907 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 02:35:50.314151  276075 status.go:174] checking status of ha-134907-m02 ...
	I1209 02:35:50.316188  276075 status.go:371] ha-134907-m02 host status = "Stopped" (err=<nil>)
	I1209 02:35:50.316206  276075 status.go:384] host is not running, skipping remaining checks
	I1209 02:35:50.316213  276075 status.go:176] ha-134907-m02 status: &{Name:ha-134907-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 02:35:50.316235  276075 status.go:174] checking status of ha-134907-m03 ...
	I1209 02:35:50.317604  276075 status.go:371] ha-134907-m03 host status = "Running" (err=<nil>)
	I1209 02:35:50.317625  276075 host.go:66] Checking if "ha-134907-m03" exists ...
	I1209 02:35:50.320157  276075 main.go:143] libmachine: domain ha-134907-m03 has defined MAC address 52:54:00:7e:43:77 in network mk-ha-134907
	I1209 02:35:50.320685  276075 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:77", ip: ""} in network mk-ha-134907: {Iface:virbr1 ExpiryTime:2025-12-09 03:32:10 +0000 UTC Type:0 Mac:52:54:00:7e:43:77 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:ha-134907-m03 Clientid:01:52:54:00:7e:43:77}
	I1209 02:35:50.320711  276075 main.go:143] libmachine: domain ha-134907-m03 has defined IP address 192.168.39.130 and MAC address 52:54:00:7e:43:77 in network mk-ha-134907
	I1209 02:35:50.320898  276075 host.go:66] Checking if "ha-134907-m03" exists ...
	I1209 02:35:50.321173  276075 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 02:35:50.323896  276075 main.go:143] libmachine: domain ha-134907-m03 has defined MAC address 52:54:00:7e:43:77 in network mk-ha-134907
	I1209 02:35:50.324313  276075 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7e:43:77", ip: ""} in network mk-ha-134907: {Iface:virbr1 ExpiryTime:2025-12-09 03:32:10 +0000 UTC Type:0 Mac:52:54:00:7e:43:77 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:ha-134907-m03 Clientid:01:52:54:00:7e:43:77}
	I1209 02:35:50.324349  276075 main.go:143] libmachine: domain ha-134907-m03 has defined IP address 192.168.39.130 and MAC address 52:54:00:7e:43:77 in network mk-ha-134907
	I1209 02:35:50.324539  276075 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/ha-134907-m03/id_rsa Username:docker}
	I1209 02:35:50.417233  276075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:35:50.438810  276075 kubeconfig.go:125] found "ha-134907" server: "https://192.168.39.254:8443"
	I1209 02:35:50.438878  276075 api_server.go:166] Checking apiserver status ...
	I1209 02:35:50.438929  276075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 02:35:50.461590  276075 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1812/cgroup
	W1209 02:35:50.475076  276075 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1812/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1209 02:35:50.475200  276075 ssh_runner.go:195] Run: ls
	I1209 02:35:50.481221  276075 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1209 02:35:50.486464  276075 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1209 02:35:50.486496  276075 status.go:463] ha-134907-m03 apiserver status = Running (err=<nil>)
	I1209 02:35:50.486509  276075 status.go:176] ha-134907-m03 status: &{Name:ha-134907-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 02:35:50.486533  276075 status.go:174] checking status of ha-134907-m04 ...
	I1209 02:35:50.488178  276075 status.go:371] ha-134907-m04 host status = "Running" (err=<nil>)
	I1209 02:35:50.488196  276075 host.go:66] Checking if "ha-134907-m04" exists ...
	I1209 02:35:50.490720  276075 main.go:143] libmachine: domain ha-134907-m04 has defined MAC address 52:54:00:5f:34:29 in network mk-ha-134907
	I1209 02:35:50.491212  276075 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:29", ip: ""} in network mk-ha-134907: {Iface:virbr1 ExpiryTime:2025-12-09 03:33:42 +0000 UTC Type:0 Mac:52:54:00:5f:34:29 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:ha-134907-m04 Clientid:01:52:54:00:5f:34:29}
	I1209 02:35:50.491236  276075 main.go:143] libmachine: domain ha-134907-m04 has defined IP address 192.168.39.64 and MAC address 52:54:00:5f:34:29 in network mk-ha-134907
	I1209 02:35:50.491394  276075 host.go:66] Checking if "ha-134907-m04" exists ...
	I1209 02:35:50.491709  276075 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 02:35:50.493860  276075 main.go:143] libmachine: domain ha-134907-m04 has defined MAC address 52:54:00:5f:34:29 in network mk-ha-134907
	I1209 02:35:50.494276  276075 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:5f:34:29", ip: ""} in network mk-ha-134907: {Iface:virbr1 ExpiryTime:2025-12-09 03:33:42 +0000 UTC Type:0 Mac:52:54:00:5f:34:29 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:ha-134907-m04 Clientid:01:52:54:00:5f:34:29}
	I1209 02:35:50.494300  276075 main.go:143] libmachine: domain ha-134907-m04 has defined IP address 192.168.39.64 and MAC address 52:54:00:5f:34:29 in network mk-ha-134907
	I1209 02:35:50.494418  276075 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/ha-134907-m04/id_rsa Username:docker}
	I1209 02:35:50.586926  276075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:35:50.608109  276075 status.go:176] ha-134907-m04 status: &{Name:ha-134907-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (88.01s)
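
To reproduce the scenario above by hand: stop one control-plane node and query status. With a member down, minikube status reports that node as Stopped and exits non-zero (exit status 7 in this run) while the remaining control-plane nodes keep serving the API. A sketch, assuming the same profile:

minikube -p ha-134907 node stop m02
minikube -p ha-134907 status --alsologtostderr -v 5
echo "status exit code: $?"    # non-zero while ha-134907-m02 is stopped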

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.55s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.55s)

TestMultiControlPlane/serial/RestartSecondaryNode (42.24s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 node start m02 --alsologtostderr -v 5
E1209 02:36:01.396036  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:36:11.627565  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:36:29.394062  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-134907 node start m02 --alsologtostderr -v 5: (41.157141871s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-134907 status --alsologtostderr -v 5: (1.008661329s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (42.24s)
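
Bringing the stopped secondary back follows the same pattern; a sketch, assuming m02 is currently stopped:

minikube -p ha-134907 node start m02
minikube -p ha-134907 status --alsologtostderr -v 5
kubectl --context ha-134907 get nodes    # all nodes should eventually report Ready again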

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.92s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.92s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (371.74s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 stop --alsologtostderr -v 5
E1209 02:37:23.320864  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:37:52.459993  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:38:08.554128  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:39:39.455411  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:40:07.166615  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-134907 stop --alsologtostderr -v 5: (4m4.818952976s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 start --wait true --alsologtostderr -v 5
E1209 02:41:29.395090  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-134907 start --wait true --alsologtostderr -v 5: (2m6.766259083s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (371.74s)
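
The property being exercised above is that a full stop/start cycle preserves the node set. A sketch of the same comparison, assuming the profile is up; the test asserts that the node list before the stop matches the list after the restart, so the diff below is expected to be empty:

minikube -p ha-134907 node list > /tmp/nodes-before.txt
minikube -p ha-134907 stop
minikube -p ha-134907 start --wait true
minikube -p ha-134907 node list > /tmp/nodes-after.txt
diff /tmp/nodes-before.txt /tmp/nodes-after.txt && echo "node list unchanged"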

TestMultiControlPlane/serial/DeleteSecondaryNode (18.35s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-134907 node delete m03 --alsologtostderr -v 5: (17.648769158s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.35s)
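
The Ready check at ha_test.go:521 is an ordinary kubectl go-template; run directly it prints the Ready condition status (True/False) for each remaining node, one per line. A sketch, assuming the same context name:

minikube -p ha-134907 node delete m03
kubectl --context ha-134907 get nodes \
  -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'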

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.57s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.57s)

TestMultiControlPlane/serial/StopCluster (261.62s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 stop --alsologtostderr -v 5
E1209 02:43:08.553916  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:44:39.455262  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:46:29.394771  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-134907 stop --alsologtostderr -v 5: (4m21.552046453s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-134907 status --alsologtostderr -v 5: exit status 7 (71.458224ms)

-- stdout --
	ha-134907
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-134907-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-134907-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1209 02:47:26.609768  279464 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:47:26.609930  279464 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:47:26.609943  279464 out.go:374] Setting ErrFile to fd 2...
	I1209 02:47:26.609949  279464 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:47:26.610146  279464 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
	I1209 02:47:26.610351  279464 out.go:368] Setting JSON to false
	I1209 02:47:26.610384  279464 mustload.go:66] Loading cluster: ha-134907
	I1209 02:47:26.610506  279464 notify.go:221] Checking for updates...
	I1209 02:47:26.610890  279464 config.go:182] Loaded profile config "ha-134907": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:47:26.610911  279464 status.go:174] checking status of ha-134907 ...
	I1209 02:47:26.613271  279464 status.go:371] ha-134907 host status = "Stopped" (err=<nil>)
	I1209 02:47:26.613293  279464 status.go:384] host is not running, skipping remaining checks
	I1209 02:47:26.613303  279464 status.go:176] ha-134907 status: &{Name:ha-134907 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 02:47:26.613327  279464 status.go:174] checking status of ha-134907-m02 ...
	I1209 02:47:26.615132  279464 status.go:371] ha-134907-m02 host status = "Stopped" (err=<nil>)
	I1209 02:47:26.615152  279464 status.go:384] host is not running, skipping remaining checks
	I1209 02:47:26.615158  279464 status.go:176] ha-134907-m02 status: &{Name:ha-134907-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 02:47:26.615174  279464 status.go:174] checking status of ha-134907-m04 ...
	I1209 02:47:26.616441  279464 status.go:371] ha-134907-m04 host status = "Stopped" (err=<nil>)
	I1209 02:47:26.616459  279464 status.go:384] host is not running, skipping remaining checks
	I1209 02:47:26.616464  279464 status.go:176] ha-134907-m04 status: &{Name:ha-134907-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (261.62s)

TestMultiControlPlane/serial/RestartCluster (101.97s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1209 02:48:08.554964  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-134907 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m41.280474231s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (101.97s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)

TestMultiControlPlane/serial/AddSecondaryNode (71.75s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 node add --control-plane --alsologtostderr -v 5
E1209 02:49:39.454766  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-134907 node add --control-plane --alsologtostderr -v 5: (1m11.014860984s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-134907 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (71.75s)
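
Growing the control plane after a restart is a single node add call; a sketch, assuming the cluster from this run is still up:

minikube -p ha-134907 node add --control-plane
minikube -p ha-134907 status    # the new member shows up as another "type: Control Plane" entry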

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.72s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.72s)

TestJSONOutput/start/Command (82.04s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-404881 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1209 02:51:02.528893  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:51:29.394390  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-404881 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m22.038774551s)
--- PASS: TestJSONOutput/start/Command (82.04s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.78s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-404881 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.78s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.65s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-404881 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.05s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-404881 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-404881 --output=json --user=testUser: (7.053559415s)
--- PASS: TestJSONOutput/stop/Command (7.05s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.27s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-638143 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-638143 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (88.317745ms)

-- stdout --
	{"specversion":"1.0","id":"fa653308-7591-4ce1-b506-2bb9d9f679d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-638143] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"84b392a1-49ff-4f49-b927-7b82f23e8208","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22081"}}
	{"specversion":"1.0","id":"2227540a-ad7d-4582-9c53-ef922dfa3f69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f2378bc6-019b-4911-a0ad-1ee9e9f8c3b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22081-254936/kubeconfig"}}
	{"specversion":"1.0","id":"2a5958a2-7457-4082-8c97-87f1f7d09b37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-254936/.minikube"}}
	{"specversion":"1.0","id":"96d9bab8-f441-4dcd-a54f-7e2a9dd0fd94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b19d980b-df02-4db7-8b67-c707687735ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"20cf7f29-e157-459b-8a50-12fd89d8f50b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-638143" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-638143
--- PASS: TestErrorJSONOutput (0.27s)
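
Every line emitted under --output=json is a CloudEvents-style object (specversion/id/source/type/data, as in the stdout capture above). A sketch of consuming that stream outside the test harness, assuming jq is installed and using hypothetical profile names; jq is not part of this suite:

# print step progress from a normal start
minikube start -p json-demo --output=json --driver=kvm2 --container-runtime=crio \
  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep + ": " + .data.message'
# surface error events such as DRV_UNSUPPORTED_OS from a failing start
minikube start -p json-demo-err --output=json --driver=fail \
  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
minikube delete -p json-demo-err    # clean up, as the test does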

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (84.91s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-901380 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-901380 --driver=kvm2  --container-runtime=crio: (41.42998887s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-904186 --driver=kvm2  --container-runtime=crio
E1209 02:52:51.630077  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:53:08.553452  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-904186 --driver=kvm2  --container-runtime=crio: (40.724781293s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-901380
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-904186
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-904186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-904186
helpers_test.go:175: Cleaning up "first-901380" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-901380
--- PASS: TestMinikubeProfile (84.91s)
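
The profile test drives the same commands a user would: create two profiles, switch the active one, list them as JSON, then clean up. A sketch with hypothetical profile names:

minikube start -p first-demo --driver=kvm2 --container-runtime=crio
minikube start -p second-demo --driver=kvm2 --container-runtime=crio
minikube profile first-demo     # make first-demo the active profile
minikube profile list -ojson    # both profiles appear; the test inspects this JSON
minikube delete -p second-demo
minikube delete -p first-demo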

TestMountStart/serial/StartWithMountFirst (22.6s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-878418 --memory=3072 --mount-string /tmp/TestMountStartserial967437950/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-878418 --memory=3072 --mount-string /tmp/TestMountStartserial967437950/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (21.598449518s)
--- PASS: TestMountStart/serial/StartWithMountFirst (22.60s)

TestMountStart/serial/VerifyMountFirst (0.33s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-878418 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-878418 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.33s)
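
The two blocks above start a Kubernetes-free VM with a host directory mounted into the guest on port 46464 and then check the mount from inside it. A sketch with a hypothetical profile name and host path, mirroring the flags used in this run:

mkdir -p /tmp/mount-demo-src
minikube start -p mount-demo --memory=3072 --no-kubernetes --driver=kvm2 --container-runtime=crio \
  --mount-string /tmp/mount-demo-src:/minikube-host \
  --mount-port 46464 --mount-uid 0 --mount-gid 0 --mount-msize 6543
minikube -p mount-demo ssh -- ls /minikube-host              # host files visible in the guest
minikube -p mount-demo ssh -- findmnt --json /minikube-host  # mount details as JSON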

TestMountStart/serial/StartWithMountSecond (23.35s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-894930 --memory=3072 --mount-string /tmp/TestMountStartserial967437950/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-894930 --memory=3072 --mount-string /tmp/TestMountStartserial967437950/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (22.354565018s)
--- PASS: TestMountStart/serial/StartWithMountSecond (23.35s)

TestMountStart/serial/VerifyMountSecond (0.33s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-894930 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-894930 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.33s)

TestMountStart/serial/DeleteFirst (0.7s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-878418 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

TestMountStart/serial/VerifyMountPostDelete (0.33s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-894930 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-894930 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.33s)

                                                
                                    
TestMountStart/serial/Stop (1.32s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-894930
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-894930: (1.320761801s)
--- PASS: TestMountStart/serial/Stop (1.32s)

                                                
                                    
TestMountStart/serial/RestartStopped (20.92s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-894930
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-894930: (19.923529841s)
--- PASS: TestMountStart/serial/RestartStopped (20.92s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-894930 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-894930 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (101.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-999895 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1209 02:54:32.462062  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:54:39.455646  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-999895 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m41.539375326s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (101.91s)
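The two-node bring-up above reduces to one start invocation plus a status check; a minimal sketch with a placeholder profile name:

	# create a cluster with one control plane and one worker
	minikube start -p multinode-demo --nodes=2 --memory=3072 --wait=true \
	  --driver=kvm2 --container-runtime=crio
	# confirm both nodes from minikube's and Kubernetes' point of view
	minikube -p multinode-demo status
	kubectl --context multinode-demo get nodes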

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999895 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999895 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-999895 -- rollout status deployment/busybox: (3.249975543s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999895 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999895 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999895 -- exec busybox-7b57f96db7-lh4f4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999895 -- exec busybox-7b57f96db7-st82v -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999895 -- exec busybox-7b57f96db7-lh4f4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999895 -- exec busybox-7b57f96db7-st82v -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999895 -- exec busybox-7b57f96db7-lh4f4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999895 -- exec busybox-7b57f96db7-st82v -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.95s)
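The DNS checks above follow a simple pattern: deploy a busybox workload that spreads across the nodes, then run nslookup from each pod. A sketch, with the pod name left as a placeholder:

	kubectl apply -f testdata/multinodes/multinode-pod-dns-test.yaml
	kubectl rollout status deployment/busybox
	# list the pod names, then query cluster DNS from one of them
	kubectl get pods -o jsonpath='{.items[*].metadata.name}'
	kubectl exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local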

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999895 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999895 -- exec busybox-7b57f96db7-lh4f4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999895 -- exec busybox-7b57f96db7-lh4f4 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999895 -- exec busybox-7b57f96db7-st82v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-999895 -- exec busybox-7b57f96db7-st82v -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.95s)
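Host reachability is checked by resolving host.minikube.internal inside a pod and pinging the resulting address; the awk/cut pipeline below is the same one the test uses to pull the IP out of the nslookup output (pod name is a placeholder):

	HOST_IP=$(kubectl exec <busybox-pod> -- sh -c \
	  "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	kubectl exec <busybox-pod> -- sh -c "ping -c 1 $HOST_IP"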

                                                
                                    
TestMultiNode/serial/AddNode (41.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-999895 -v=5 --alsologtostderr
E1209 02:56:29.394752  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-999895 -v=5 --alsologtostderr: (40.809152682s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.30s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-999895 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.50s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 cp testdata/cp-test.txt multinode-999895:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 ssh -n multinode-999895 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 cp multinode-999895:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3081298566/001/cp-test_multinode-999895.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 ssh -n multinode-999895 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 cp multinode-999895:/home/docker/cp-test.txt multinode-999895-m02:/home/docker/cp-test_multinode-999895_multinode-999895-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 ssh -n multinode-999895 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 ssh -n multinode-999895-m02 "sudo cat /home/docker/cp-test_multinode-999895_multinode-999895-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 cp multinode-999895:/home/docker/cp-test.txt multinode-999895-m03:/home/docker/cp-test_multinode-999895_multinode-999895-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 ssh -n multinode-999895 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 ssh -n multinode-999895-m03 "sudo cat /home/docker/cp-test_multinode-999895_multinode-999895-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 cp testdata/cp-test.txt multinode-999895-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 ssh -n multinode-999895-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 cp multinode-999895-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3081298566/001/cp-test_multinode-999895-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 ssh -n multinode-999895-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 cp multinode-999895-m02:/home/docker/cp-test.txt multinode-999895:/home/docker/cp-test_multinode-999895-m02_multinode-999895.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 ssh -n multinode-999895-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 ssh -n multinode-999895 "sudo cat /home/docker/cp-test_multinode-999895-m02_multinode-999895.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 cp multinode-999895-m02:/home/docker/cp-test.txt multinode-999895-m03:/home/docker/cp-test_multinode-999895-m02_multinode-999895-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 ssh -n multinode-999895-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 ssh -n multinode-999895-m03 "sudo cat /home/docker/cp-test_multinode-999895-m02_multinode-999895-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 cp testdata/cp-test.txt multinode-999895-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 ssh -n multinode-999895-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 cp multinode-999895-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3081298566/001/cp-test_multinode-999895-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 ssh -n multinode-999895-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 cp multinode-999895-m03:/home/docker/cp-test.txt multinode-999895:/home/docker/cp-test_multinode-999895-m03_multinode-999895.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 ssh -n multinode-999895-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 ssh -n multinode-999895 "sudo cat /home/docker/cp-test_multinode-999895-m03_multinode-999895.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 cp multinode-999895-m03:/home/docker/cp-test.txt multinode-999895-m02:/home/docker/cp-test_multinode-999895-m03_multinode-999895-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 ssh -n multinode-999895-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 ssh -n multinode-999895-m02 "sudo cat /home/docker/cp-test_multinode-999895-m03_multinode-999895-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.47s)
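minikube cp accepts a host path or <node>:<path> on either side, which is what the copy matrix above exercises; a condensed sketch with placeholder profile and node names:

	# host -> node, then read it back over ssh
	minikube -p multinode-demo cp ./cp-test.txt multinode-demo:/home/docker/cp-test.txt
	minikube -p multinode-demo ssh -n multinode-demo "sudo cat /home/docker/cp-test.txt"
	# node -> host
	minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt ./from-node.txt
	# node -> node
	minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt \
	  multinode-demo-m02:/home/docker/cp-test.txt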

                                                
                                    
TestMultiNode/serial/StopNode (2.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-999895 node stop m03: (1.667158602s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-999895 status: exit status 7 (341.284686ms)

                                                
                                                
-- stdout --
	multinode-999895
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-999895-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-999895-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-999895 status --alsologtostderr: exit status 7 (337.206747ms)

                                                
                                                
-- stdout --
	multinode-999895
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-999895-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-999895-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 02:57:10.514948  285111 out.go:360] Setting OutFile to fd 1 ...
	I1209 02:57:10.515200  285111 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:57:10.515208  285111 out.go:374] Setting ErrFile to fd 2...
	I1209 02:57:10.515213  285111 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 02:57:10.515416  285111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
	I1209 02:57:10.515591  285111 out.go:368] Setting JSON to false
	I1209 02:57:10.515616  285111 mustload.go:66] Loading cluster: multinode-999895
	I1209 02:57:10.515697  285111 notify.go:221] Checking for updates...
	I1209 02:57:10.515966  285111 config.go:182] Loaded profile config "multinode-999895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 02:57:10.515981  285111 status.go:174] checking status of multinode-999895 ...
	I1209 02:57:10.517957  285111 status.go:371] multinode-999895 host status = "Running" (err=<nil>)
	I1209 02:57:10.517978  285111 host.go:66] Checking if "multinode-999895" exists ...
	I1209 02:57:10.520579  285111 main.go:143] libmachine: domain multinode-999895 has defined MAC address 52:54:00:ab:e5:fa in network mk-multinode-999895
	I1209 02:57:10.521144  285111 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:e5:fa", ip: ""} in network mk-multinode-999895: {Iface:virbr1 ExpiryTime:2025-12-09 03:54:48 +0000 UTC Type:0 Mac:52:54:00:ab:e5:fa Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:multinode-999895 Clientid:01:52:54:00:ab:e5:fa}
	I1209 02:57:10.521181  285111 main.go:143] libmachine: domain multinode-999895 has defined IP address 192.168.39.144 and MAC address 52:54:00:ab:e5:fa in network mk-multinode-999895
	I1209 02:57:10.521351  285111 host.go:66] Checking if "multinode-999895" exists ...
	I1209 02:57:10.521594  285111 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 02:57:10.523779  285111 main.go:143] libmachine: domain multinode-999895 has defined MAC address 52:54:00:ab:e5:fa in network mk-multinode-999895
	I1209 02:57:10.524159  285111 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:e5:fa", ip: ""} in network mk-multinode-999895: {Iface:virbr1 ExpiryTime:2025-12-09 03:54:48 +0000 UTC Type:0 Mac:52:54:00:ab:e5:fa Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:multinode-999895 Clientid:01:52:54:00:ab:e5:fa}
	I1209 02:57:10.524183  285111 main.go:143] libmachine: domain multinode-999895 has defined IP address 192.168.39.144 and MAC address 52:54:00:ab:e5:fa in network mk-multinode-999895
	I1209 02:57:10.524343  285111 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/multinode-999895/id_rsa Username:docker}
	I1209 02:57:10.607270  285111 ssh_runner.go:195] Run: systemctl --version
	I1209 02:57:10.614583  285111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:57:10.633903  285111 kubeconfig.go:125] found "multinode-999895" server: "https://192.168.39.144:8443"
	I1209 02:57:10.633947  285111 api_server.go:166] Checking apiserver status ...
	I1209 02:57:10.633988  285111 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 02:57:10.654623  285111 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1352/cgroup
	W1209 02:57:10.666938  285111 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1352/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1209 02:57:10.667000  285111 ssh_runner.go:195] Run: ls
	I1209 02:57:10.673249  285111 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I1209 02:57:10.678159  285111 api_server.go:279] https://192.168.39.144:8443/healthz returned 200:
	ok
	I1209 02:57:10.678192  285111 status.go:463] multinode-999895 apiserver status = Running (err=<nil>)
	I1209 02:57:10.678212  285111 status.go:176] multinode-999895 status: &{Name:multinode-999895 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 02:57:10.678233  285111 status.go:174] checking status of multinode-999895-m02 ...
	I1209 02:57:10.679857  285111 status.go:371] multinode-999895-m02 host status = "Running" (err=<nil>)
	I1209 02:57:10.679885  285111 host.go:66] Checking if "multinode-999895-m02" exists ...
	I1209 02:57:10.682253  285111 main.go:143] libmachine: domain multinode-999895-m02 has defined MAC address 52:54:00:b5:aa:d6 in network mk-multinode-999895
	I1209 02:57:10.682686  285111 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b5:aa:d6", ip: ""} in network mk-multinode-999895: {Iface:virbr1 ExpiryTime:2025-12-09 03:55:43 +0000 UTC Type:0 Mac:52:54:00:b5:aa:d6 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-999895-m02 Clientid:01:52:54:00:b5:aa:d6}
	I1209 02:57:10.682709  285111 main.go:143] libmachine: domain multinode-999895-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:b5:aa:d6 in network mk-multinode-999895
	I1209 02:57:10.682856  285111 host.go:66] Checking if "multinode-999895-m02" exists ...
	I1209 02:57:10.683097  285111 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 02:57:10.685075  285111 main.go:143] libmachine: domain multinode-999895-m02 has defined MAC address 52:54:00:b5:aa:d6 in network mk-multinode-999895
	I1209 02:57:10.685500  285111 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b5:aa:d6", ip: ""} in network mk-multinode-999895: {Iface:virbr1 ExpiryTime:2025-12-09 03:55:43 +0000 UTC Type:0 Mac:52:54:00:b5:aa:d6 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-999895-m02 Clientid:01:52:54:00:b5:aa:d6}
	I1209 02:57:10.685527  285111 main.go:143] libmachine: domain multinode-999895-m02 has defined IP address 192.168.39.109 and MAC address 52:54:00:b5:aa:d6 in network mk-multinode-999895
	I1209 02:57:10.685646  285111 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/multinode-999895-m02/id_rsa Username:docker}
	I1209 02:57:10.768729  285111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 02:57:10.785463  285111 status.go:176] multinode-999895-m02 status: &{Name:multinode-999895-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1209 02:57:10.785513  285111 status.go:174] checking status of multinode-999895-m03 ...
	I1209 02:57:10.787279  285111 status.go:371] multinode-999895-m03 host status = "Stopped" (err=<nil>)
	I1209 02:57:10.787302  285111 status.go:384] host is not running, skipping remaining checks
	I1209 02:57:10.787307  285111 status.go:176] multinode-999895-m03 status: &{Name:multinode-999895-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.35s)
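Once any node is stopped, minikube status exits non-zero (7 in the run above), so scripts should branch on the exit code rather than parse the table; a short sketch with a placeholder profile:

	minikube -p multinode-demo node stop m03
	minikube -p multinode-demo status
	rc=$?   # non-zero (7 above) when one or more nodes are not running
	[ "$rc" -ne 0 ] && echo "cluster degraded (status exit code $rc)"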

                                                
                                    
TestMultiNode/serial/StartAfterStop (45.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-999895 node start m03 -v=5 --alsologtostderr: (44.792648153s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (45.33s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (329.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-999895
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-999895
E1209 02:58:08.561628  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 02:59:39.454700  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-999895: (2m49.484517127s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-999895 --wait=true -v=5 --alsologtostderr
E1209 03:01:29.394072  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:03:08.553103  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-999895 --wait=true -v=5 --alsologtostderr: (2m39.83745414s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-999895
--- PASS: TestMultiNode/serial/RestartKeepsNodes (329.47s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-999895 node delete m03: (2.184480599s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.72s)
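The readiness check after removing a node uses a go-template that prints each node's Ready condition; the same one-liner is useful interactively (context name is a placeholder):

	minikube -p multinode-demo node delete m03
	kubectl --context multinode-demo get nodes \
	  -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'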

                                                
                                    
TestMultiNode/serial/StopMultiNode (163.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 stop
E1209 03:04:39.455316  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-999895 stop: (2m42.870607421s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-999895 status: exit status 7 (68.873574ms)

                                                
                                                
-- stdout --
	multinode-999895
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-999895-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-999895 status --alsologtostderr: exit status 7 (69.677482ms)

                                                
                                                
-- stdout --
	multinode-999895
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-999895-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:06:11.310633  287583 out.go:360] Setting OutFile to fd 1 ...
	I1209 03:06:11.310733  287583 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 03:06:11.310737  287583 out.go:374] Setting ErrFile to fd 2...
	I1209 03:06:11.310741  287583 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 03:06:11.310935  287583 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
	I1209 03:06:11.311105  287583 out.go:368] Setting JSON to false
	I1209 03:06:11.311132  287583 mustload.go:66] Loading cluster: multinode-999895
	I1209 03:06:11.311303  287583 notify.go:221] Checking for updates...
	I1209 03:06:11.311460  287583 config.go:182] Loaded profile config "multinode-999895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 03:06:11.311501  287583 status.go:174] checking status of multinode-999895 ...
	I1209 03:06:11.314497  287583 status.go:371] multinode-999895 host status = "Stopped" (err=<nil>)
	I1209 03:06:11.314526  287583 status.go:384] host is not running, skipping remaining checks
	I1209 03:06:11.314531  287583 status.go:176] multinode-999895 status: &{Name:multinode-999895 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 03:06:11.314566  287583 status.go:174] checking status of multinode-999895-m02 ...
	I1209 03:06:11.315906  287583 status.go:371] multinode-999895-m02 host status = "Stopped" (err=<nil>)
	I1209 03:06:11.315982  287583 status.go:384] host is not running, skipping remaining checks
	I1209 03:06:11.315990  287583 status.go:176] multinode-999895-m02 status: &{Name:multinode-999895-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (163.01s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (120.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-999895 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1209 03:06:29.394904  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:07:42.531086  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:08:08.554026  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-999895 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m59.813878951s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-999895 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (120.38s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (40.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-999895
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-999895-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-999895-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (83.860255ms)

                                                
                                                
-- stdout --
	* [multinode-999895-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-254936/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-254936/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-999895-m02' is duplicated with machine name 'multinode-999895-m02' in profile 'multinode-999895'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-999895-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-999895-m03 --driver=kvm2  --container-runtime=crio: (39.619321735s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-999895
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-999895: exit status 80 (226.343012ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-999895 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-999895-m03 already exists in multinode-999895-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-999895-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.85s)
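Profile names must be unique, and they also may not collide with the generated machine names of an existing multi-node profile (<profile>-m02, <profile>-m03, ...), which is why the first start above exits with status 14. A quick pre-flight check, assuming the JSON profile listing is inspected for existing profile and node names:

	# list existing profiles (the JSON output includes each profile's node configuration)
	minikube profile list --output json
	# then pick a name that matches neither an existing profile nor <profile>-mNN
	minikube start -p some-unique-name --driver=kvm2 --container-runtime=crio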

                                                
                                    
TestScheduledStopUnix (110.26s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-195755 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-195755 --memory=3072 --driver=kvm2  --container-runtime=crio: (38.538882107s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-195755 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1209 03:12:19.041952  290061 out.go:360] Setting OutFile to fd 1 ...
	I1209 03:12:19.042132  290061 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 03:12:19.042145  290061 out.go:374] Setting ErrFile to fd 2...
	I1209 03:12:19.042153  290061 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 03:12:19.042538  290061 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
	I1209 03:12:19.043014  290061 out.go:368] Setting JSON to false
	I1209 03:12:19.043154  290061 mustload.go:66] Loading cluster: scheduled-stop-195755
	I1209 03:12:19.043611  290061 config.go:182] Loaded profile config "scheduled-stop-195755": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 03:12:19.043729  290061 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/scheduled-stop-195755/config.json ...
	I1209 03:12:19.044010  290061 mustload.go:66] Loading cluster: scheduled-stop-195755
	I1209 03:12:19.044167  290061 config.go:182] Loaded profile config "scheduled-stop-195755": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-195755 -n scheduled-stop-195755
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-195755 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1209 03:12:19.351795  290106 out.go:360] Setting OutFile to fd 1 ...
	I1209 03:12:19.351915  290106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 03:12:19.351923  290106 out.go:374] Setting ErrFile to fd 2...
	I1209 03:12:19.351928  290106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 03:12:19.352110  290106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
	I1209 03:12:19.352347  290106 out.go:368] Setting JSON to false
	I1209 03:12:19.352581  290106 daemonize_unix.go:73] killing process 290094 as it is an old scheduled stop
	I1209 03:12:19.352690  290106 mustload.go:66] Loading cluster: scheduled-stop-195755
	I1209 03:12:19.353047  290106 config.go:182] Loaded profile config "scheduled-stop-195755": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 03:12:19.353121  290106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/scheduled-stop-195755/config.json ...
	I1209 03:12:19.353295  290106 mustload.go:66] Loading cluster: scheduled-stop-195755
	I1209 03:12:19.353385  290106 config.go:182] Loaded profile config "scheduled-stop-195755": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1209 03:12:19.359746  258854 retry.go:31] will retry after 108.662µs: open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/scheduled-stop-195755/pid: no such file or directory
I1209 03:12:19.360907  258854 retry.go:31] will retry after 75.589µs: open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/scheduled-stop-195755/pid: no such file or directory
I1209 03:12:19.362048  258854 retry.go:31] will retry after 204.845µs: open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/scheduled-stop-195755/pid: no such file or directory
I1209 03:12:19.363199  258854 retry.go:31] will retry after 424.419µs: open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/scheduled-stop-195755/pid: no such file or directory
I1209 03:12:19.364350  258854 retry.go:31] will retry after 294.447µs: open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/scheduled-stop-195755/pid: no such file or directory
I1209 03:12:19.365499  258854 retry.go:31] will retry after 404.403µs: open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/scheduled-stop-195755/pid: no such file or directory
I1209 03:12:19.366651  258854 retry.go:31] will retry after 668.029µs: open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/scheduled-stop-195755/pid: no such file or directory
I1209 03:12:19.367795  258854 retry.go:31] will retry after 1.434736ms: open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/scheduled-stop-195755/pid: no such file or directory
I1209 03:12:19.370053  258854 retry.go:31] will retry after 3.230815ms: open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/scheduled-stop-195755/pid: no such file or directory
I1209 03:12:19.374284  258854 retry.go:31] will retry after 3.911226ms: open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/scheduled-stop-195755/pid: no such file or directory
I1209 03:12:19.378559  258854 retry.go:31] will retry after 6.711655ms: open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/scheduled-stop-195755/pid: no such file or directory
I1209 03:12:19.385838  258854 retry.go:31] will retry after 6.751268ms: open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/scheduled-stop-195755/pid: no such file or directory
I1209 03:12:19.393125  258854 retry.go:31] will retry after 10.762896ms: open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/scheduled-stop-195755/pid: no such file or directory
I1209 03:12:19.405123  258854 retry.go:31] will retry after 26.686138ms: open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/scheduled-stop-195755/pid: no such file or directory
I1209 03:12:19.432420  258854 retry.go:31] will retry after 32.455054ms: open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/scheduled-stop-195755/pid: no such file or directory
I1209 03:12:19.465803  258854 retry.go:31] will retry after 32.985414ms: open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/scheduled-stop-195755/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-195755 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-195755 -n scheduled-stop-195755
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-195755
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-195755 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1209 03:12:45.107812  290261 out.go:360] Setting OutFile to fd 1 ...
	I1209 03:12:45.108157  290261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 03:12:45.108180  290261 out.go:374] Setting ErrFile to fd 2...
	I1209 03:12:45.108187  290261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 03:12:45.108413  290261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
	I1209 03:12:45.108701  290261 out.go:368] Setting JSON to false
	I1209 03:12:45.108783  290261 mustload.go:66] Loading cluster: scheduled-stop-195755
	I1209 03:12:45.109156  290261 config.go:182] Loaded profile config "scheduled-stop-195755": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 03:12:45.109231  290261 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/scheduled-stop-195755/config.json ...
	I1209 03:12:45.109421  290261 mustload.go:66] Loading cluster: scheduled-stop-195755
	I1209 03:12:45.109523  290261 config.go:182] Loaded profile config "scheduled-stop-195755": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1209 03:13:08.560381  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-195755
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-195755: exit status 7 (66.840682ms)

                                                
                                                
-- stdout --
	scheduled-stop-195755
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-195755 -n scheduled-stop-195755
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-195755 -n scheduled-stop-195755: exit status 7 (65.02394ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-195755" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-195755
--- PASS: TestScheduledStopUnix (110.26s)
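The scheduled-stop flow above is driven by three flags on minikube stop; a compact sketch against a placeholder profile:

	# schedule a stop 5 minutes out (the command returns immediately; a background process waits)
	minikube stop -p demo --schedule 5m
	# inspect the pending schedule
	minikube status -p demo --format='{{.TimeToStop}}'
	# cancel it, or replace it with a shorter delay
	minikube stop -p demo --cancel-scheduled
	minikube stop -p demo --schedule 15s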

                                                
                                    
TestRunningBinaryUpgrade (381.74s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1020809944 start -p running-upgrade-213301 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1020809944 start -p running-upgrade-213301 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m42.158800643s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-213301 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-213301 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (4m37.838809331s)
helpers_test.go:175: Cleaning up "running-upgrade-213301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-213301
--- PASS: TestRunningBinaryUpgrade (381.74s)
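The running-binary upgrade amounts to creating a profile with an older release and then re-running start on the same profile with the newer binary; a sketch in which the old-binary path and profile name are placeholders:

	# bring the cluster up with a previously released binary
	/path/to/minikube-v1.35.0 start -p upgrade-demo --memory=3072 \
	  --vm-driver=kvm2 --container-runtime=crio
	# upgrade in place by starting the same profile with the newer binary
	minikube start -p upgrade-demo --memory=3072 \
	  --driver=kvm2 --container-runtime=crio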

                                                
                                    
TestKubernetesUpgrade (96.27s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-321262 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-321262 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.151199252s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-321262
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-321262: (2.301686178s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-321262 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-321262 status --format={{.Host}}: exit status 7 (85.891212ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-321262 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1209 03:18:08.553867  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-321262 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (35.619808763s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-321262 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-321262 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-321262 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (92.82808ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-321262] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-254936/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-254936/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-321262
	    minikube start -p kubernetes-upgrade-321262 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3212622 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-321262 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-321262 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-321262 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (13.96178249s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-321262" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-321262
--- PASS: TestKubernetesUpgrade (96.27s)
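As the exit-106 output above shows, minikube refuses to downgrade an existing cluster's Kubernetes version in place; the supported path is the one it suggests, for example recreating the profile at the lower version (this discards cluster state):

	minikube delete -p kubernetes-upgrade-321262
	minikube start -p kubernetes-upgrade-321262 --kubernetes-version=v1.28.0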

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-992827 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-992827 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (95.444047ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-992827] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-254936/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-254936/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
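--no-kubernetes and --kubernetes-version are mutually exclusive (exit status 14 above); if a version is pinned in the global config, clear it first, as the error message suggests. The profile name below is a placeholder:

	minikube config unset kubernetes-version
	minikube start -p nok8s-demo --no-kubernetes --memory=3072 \
	  --driver=kvm2 --container-runtime=crio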

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (81.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-992827 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-992827 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m21.259913284s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-992827 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (81.53s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (31.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-992827 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-992827 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (30.003513317s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-992827 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-992827 status -o json: exit status 2 (215.514615ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-992827","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-992827
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (31.06s)
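	Here the existing profile is restarted with --no-kubernetes, so the VM stays up while kubelet and the API server are stopped; `status` exits non-zero for a partially running cluster, which the test appears to tolerate, reading the JSON fields shown above instead. A quick, illustrative way to pull just those fields from the same command:
	
		out/minikube-linux-amd64 -p NoKubernetes-992827 status -o json | grep -oE '"(Host|Kubelet|APIServer)":"[^"]*"'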

                                                
                                    
TestNoKubernetes/serial/Start (47.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-992827 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-992827 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (47.964102061s)
--- PASS: TestNoKubernetes/serial/Start (47.96s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22081-254936/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-992827 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-992827 "sudo systemctl is-active --quiet service kubelet": exit status 1 (190.112929ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
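	This check leans on systemctl's exit status rather than its output: `is-active --quiet` prints nothing and exits 0 only when the unit is active, so any non-zero status (here 4, which usually indicates the unit is not even loaded) confirms kubelet is not running. A sketch of the same probe against the profile from this log:
	
		out/minikube-linux-amd64 ssh -p NoKubernetes-992827 "sudo systemctl is-active --quiet service kubelet" \
			&& echo "kubelet active" || echo "kubelet not active"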

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.14s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-992827
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-992827: (1.482216507s)
--- PASS: TestNoKubernetes/serial/Stop (1.48s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (35.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-992827 --driver=kvm2  --container-runtime=crio
E1209 03:16:29.394053  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-992827 --driver=kvm2  --container-runtime=crio: (35.123397088s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (35.12s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-992827 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-992827 "sudo systemctl is-active --quiet service kubelet": exit status 1 (217.916827ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
TestPause/serial/Start (106.44s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-739105 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-739105 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m46.436601739s)
--- PASS: TestPause/serial/Start (106.44s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.66s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.66s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (78.03s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.1095554897 start -p stopped-upgrade-644254 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.1095554897 start -p stopped-upgrade-644254 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (40.801084878s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.1095554897 -p stopped-upgrade-644254 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.1095554897 -p stopped-upgrade-644254 stop: (1.908703401s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-644254 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-644254 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (35.323179481s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (78.03s)

                                                
                                    
TestISOImage/Setup (25.01s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-607644 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-607644 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.013405863s)
--- PASS: TestISOImage/Setup (25.01s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.58s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-644254
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-644254: (1.575388371s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.58s)

                                                
                                    
TestNetworkPlugins/group/false (4.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-298769 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-298769 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (168.622694ms)

                                                
                                                
-- stdout --
	* [false-298769] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-254936/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-254936/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:19:52.218906  296514 out.go:360] Setting OutFile to fd 1 ...
	I1209 03:19:52.219125  296514 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 03:19:52.219143  296514 out.go:374] Setting ErrFile to fd 2...
	I1209 03:19:52.219149  296514 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 03:19:52.219518  296514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
	I1209 03:19:52.220295  296514 out.go:368] Setting JSON to false
	I1209 03:19:52.221694  296514 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":32542,"bootTime":1765217850,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 03:19:52.221795  296514 start.go:143] virtualization: kvm guest
	I1209 03:19:52.224485  296514 out.go:179] * [false-298769] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1209 03:19:52.229213  296514 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 03:19:52.229236  296514 notify.go:221] Checking for updates...
	I1209 03:19:52.232177  296514 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:19:52.233603  296514 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-254936/kubeconfig
	I1209 03:19:52.235385  296514 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-254936/.minikube
	I1209 03:19:52.236861  296514 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 03:19:52.238281  296514 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:19:52.240375  296514 config.go:182] Loaded profile config "cert-expiration-699833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 03:19:52.240536  296514 config.go:182] Loaded profile config "guest-607644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1209 03:19:52.240666  296514 config.go:182] Loaded profile config "running-upgrade-213301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1209 03:19:52.240836  296514 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 03:19:52.291892  296514 out.go:179] * Using the kvm2 driver based on user configuration
	I1209 03:19:52.293394  296514 start.go:309] selected driver: kvm2
	I1209 03:19:52.293416  296514 start.go:927] validating driver "kvm2" against <nil>
	I1209 03:19:52.293431  296514 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:19:52.295979  296514 out.go:203] 
	W1209 03:19:52.297256  296514 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1209 03:19:52.298589  296514 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-298769 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-298769

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-298769

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-298769

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-298769

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-298769

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-298769

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-298769

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-298769

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-298769

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-298769

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-298769

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-298769" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-298769" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22081-254936/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 09 Dec 2025 03:16:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.113:8443
  name: cert-expiration-699833
contexts:
- context:
    cluster: cert-expiration-699833
    extensions:
    - extension:
        last-update: Tue, 09 Dec 2025 03:16:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-699833
  name: cert-expiration-699833
current-context: ""
kind: Config
users:
- name: cert-expiration-699833
  user:
    client-certificate: /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/cert-expiration-699833/client.crt
    client-key: /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/cert-expiration-699833/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-298769

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-298769"

                                                
                                                
----------------------- debugLogs end: false-298769 [took: 4.018602628s] --------------------------------
helpers_test.go:175: Cleaning up "false-298769" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-298769
--- PASS: TestNetworkPlugins/group/false (4.39s)
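	The rejection above is the point of this case: with the crio runtime a CNI is mandatory, so `--cni=false` fails fast with MK_USAGE before any VM is created, which is also why every debugLogs probe reports a missing profile/context. A sketch of an accepted form, keeping the flags from this log and swapping in an explicit CNI (the choice of "bridge" and the profile name are illustrative):
	
		minikube start -p demo --memory=3072 --cni=bridge --driver=kvm2 --container-runtime=crio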

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (108.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-435592 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-435592 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m48.691747497s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (108.69s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (120.59s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-042483 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-042483 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (2m0.590230668s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (120.59s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (127.7s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-059151 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-059151 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (2m7.696409172s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (127.70s)

                                                
                                    
TestISOImage/Binaries/crictl (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-607644 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.21s)

                                                
                                    
TestISOImage/Binaries/curl (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-607644 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.19s)

                                                
                                    
TestISOImage/Binaries/docker (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-607644 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.20s)

                                                
                                    
TestISOImage/Binaries/git (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-607644 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.19s)

                                                
                                    
TestISOImage/Binaries/iptables (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-607644 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.18s)

                                                
                                    
TestISOImage/Binaries/podman (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-607644 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.18s)

                                                
                                    
TestISOImage/Binaries/rsync (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-607644 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.20s)

                                                
                                    
TestISOImage/Binaries/socat (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-607644 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.20s)

                                                
                                    
TestISOImage/Binaries/wget (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-607644 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.19s)

                                                
                                    
TestISOImage/Binaries/VBoxControl (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-607644 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.20s)

                                                
                                    
TestISOImage/Binaries/VBoxService (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-607644 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.20s)
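	Each TestISOImage/Binaries case above only asserts that the named tool resolves on the guest's PATH via `which`. A sketch reproducing the same sweep in one loop against the profile from this log:
	
		for b in crictl curl docker git iptables podman rsync socat wget VBoxControl VBoxService; do
			out/minikube-linux-amd64 -p guest-607644 ssh "which $b" >/dev/null \
				&& echo "$b: present" || echo "$b: missing"
		done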

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (141.82s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-621448 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
E1209 03:21:29.395007  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-621448 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (2m21.823872044s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (141.82s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-435592 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d1316f47-d0a2-4c7f-9f49-ba02e1a6d523] Pending
helpers_test.go:352: "busybox" [d1316f47-d0a2-4c7f-9f49-ba02e1a6d523] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d1316f47-d0a2-4c7f-9f49-ba02e1a6d523] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.0045542s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-435592 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.33s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-435592 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-435592 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.162141757s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-435592 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (81.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-435592 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-435592 --alsologtostderr -v=3: (1m21.681173859s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (81.68s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-042483 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [23d3cb12-881d-4563-93f9-a0f89ce489f0] Pending
helpers_test.go:352: "busybox" [23d3cb12-881d-4563-93f9-a0f89ce489f0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [23d3cb12-881d-4563-93f9-a0f89ce489f0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.005193038s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-042483 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.34s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-059151 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d28574b6-8bd8-4f57-a2f3-4166df6742eb] Pending
helpers_test.go:352: "busybox" [d28574b6-8bd8-4f57-a2f3-4166df6742eb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d28574b6-8bd8-4f57-a2f3-4166df6742eb] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004442847s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-059151 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-042483 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-042483 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.020277595s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-042483 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (73.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-042483 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-042483 --alsologtostderr -v=3: (1m13.115980901s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (73.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-059151 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-059151 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (86.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-059151 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-059151 --alsologtostderr -v=3: (1m26.62790529s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (86.63s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-621448 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [086117f2-a75f-4553-9a08-931cdccd5686] Pending
helpers_test.go:352: "busybox" [086117f2-a75f-4553-9a08-931cdccd5686] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [086117f2-a75f-4553-9a08-931cdccd5686] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004192504s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-621448 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-621448 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-621448 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (90.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-621448 --alsologtostderr -v=3
E1209 03:23:08.553152  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-621448 --alsologtostderr -v=3: (1m30.387621402s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (90.39s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-435592 -n old-k8s-version-435592
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-435592 -n old-k8s-version-435592: exit status 7 (68.393947ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-435592 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)
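Note: the non-zero exit here is expected: with the profile stopped, `minikube status` prints "Stopped" and exits 7, which the test explicitly tolerates ("may be ok") before enabling the dashboard addon against the stopped profile. A sketch of the same check by hand:
  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-435592 -n old-k8s-version-435592
  echo "status exit code: $?"   # 7 while stopped, per the log above
  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-435592 --images=MetricsScraper=registry.k8s.io/echoserver:1.4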

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (46.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-435592 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-435592 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (45.670437251s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-435592 -n old-k8s-version-435592
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (46.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-042483 -n no-preload-042483
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-042483 -n no-preload-042483: exit status 7 (71.296798ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-042483 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (64.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-042483 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-042483 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m3.974538796s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-042483 -n no-preload-042483
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (64.33s)
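Note: SecondStart restarts the previously stopped profile with the same flags and then re-checks the host state. Because this group runs with --preload=false, the restart pulls its images individually instead of loading minikube's preloaded tarball, which is why it is noticeably slower than the embed-certs and default-k8s-diff-port restarts below. A condensed sketch (flags trimmed from the full command in the log):
  out/minikube-linux-amd64 start -p no-preload-042483 --memory=3072 --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-042483 -n no-preload-042483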

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-059151 -n embed-certs-059151
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-059151 -n embed-certs-059151: exit status 7 (79.928958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-059151 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (53s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-059151 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-059151 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (52.695440014s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-059151 -n embed-certs-059151
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (53.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (13.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-2prgg" [30efe557-0ae1-4724-9095-90621de87f36] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-2prgg" [30efe557-0ae1-4724-9095-90621de87f36] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.005023132s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (13.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-621448 -n default-k8s-diff-port-621448
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-621448 -n default-k8s-diff-port-621448: exit status 7 (85.27688ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-621448 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.52s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-621448 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-621448 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (53.109661249s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-621448 -n default-k8s-diff-port-621448
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.52s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-2prgg" [30efe557-0ae1-4724-9095-90621de87f36] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005996277s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-435592 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-435592 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
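Note: VerifyKubernetesImages lists every image known to the node's container runtime and reports anything outside the expected Kubernetes set (here kindnetd and the busybox test image). The same data can be eyeballed by hand; the `repoTags` field name below is an assumption about the JSON layout, while the plain listing needs no such assumption:
  out/minikube-linux-amd64 -p old-k8s-version-435592 image list
  out/minikube-linux-amd64 -p old-k8s-version-435592 image list --format=json | jq -r '.[].repoTags[]'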

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-435592 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-435592 --alsologtostderr -v=1: (1.035728505s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-435592 -n old-k8s-version-435592
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-435592 -n old-k8s-version-435592: exit status 2 (313.653288ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-435592 -n old-k8s-version-435592
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-435592 -n old-k8s-version-435592: exit status 2 (304.984088ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-435592 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p old-k8s-version-435592 --alsologtostderr -v=1: (1.13548363s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-435592 -n old-k8s-version-435592
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-435592 -n old-k8s-version-435592
E1209 03:24:22.533145  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.64s)
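Note: the Pause subtest drives a pause / verify / unpause / verify cycle. While paused, `status` exits 2 and reports the apiserver as Paused and the kubelet as Stopped, which is what the "status error: exit status 2 (may be ok)" lines above record. A condensed sketch of the same sequence:
  out/minikube-linux-amd64 pause -p old-k8s-version-435592 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-435592 -n old-k8s-version-435592   # "Paused", exit 2
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-435592 -n old-k8s-version-435592     # "Stopped", exit 2
  out/minikube-linux-amd64 unpause -p old-k8s-version-435592 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-435592 -n old-k8s-version-435592   # succeeds again once unpaused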

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (60.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-653727 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-653727 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m0.209041676s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (60.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-9wtdv" [90c7122e-c3f9-47ce-a882-06d6b98f8fb3] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-9wtdv" [90c7122e-c3f9-47ce-a882-06d6b98f8fb3] Running
E1209 03:24:39.455605  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-074400/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.008098531s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4qqvt" [6c5cd754-2389-4493-8af8-06c934319439] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4qqvt" [6c5cd754-2389-4493-8af8-06c934319439] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.005576917s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-9wtdv" [90c7122e-c3f9-47ce-a882-06d6b98f8fb3] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006052104s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-042483 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-042483 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-042483 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-042483 --alsologtostderr -v=1: (1.075777604s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-042483 -n no-preload-042483
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-042483 -n no-preload-042483: exit status 2 (276.123003ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-042483 -n no-preload-042483
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-042483 -n no-preload-042483: exit status 2 (265.559357ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-042483 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-042483 -n no-preload-042483
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-042483 -n no-preload-042483
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (93.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-298769 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-298769 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m33.948726925s)
--- PASS: TestNetworkPlugins/group/auto/Start (93.95s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4qqvt" [6c5cd754-2389-4493-8af8-06c934319439] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006813754s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-059151 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-059151 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.49s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-059151 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-059151 --alsologtostderr -v=1: (1.148564266s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-059151 -n embed-certs-059151
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-059151 -n embed-certs-059151: exit status 2 (315.843855ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-059151 -n embed-certs-059151
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-059151 -n embed-certs-059151: exit status 2 (300.927359ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-059151 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-059151 --alsologtostderr -v=1: (1.043193191s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-059151 -n embed-certs-059151
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-059151 -n embed-certs-059151
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.49s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bt78s" [773d9b85-7c46-4fef-97ea-2687b3c9f98b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.035958368s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (74.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-298769 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-298769 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m14.548234205s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (74.55s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bt78s" [773d9b85-7c46-4fef-97ea-2687b3c9f98b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006077529s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-621448 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-621448 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-621448 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-621448 -n default-k8s-diff-port-621448
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-621448 -n default-k8s-diff-port-621448: exit status 2 (242.076793ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-621448 -n default-k8s-diff-port-621448
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-621448 -n default-k8s-diff-port-621448: exit status 2 (268.345011ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-621448 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-621448 -n default-k8s-diff-port-621448
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-621448 -n default-k8s-diff-port-621448
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (116s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-298769 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-298769 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m55.995045956s)
--- PASS: TestNetworkPlugins/group/calico/Start (116.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.51s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-653727 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-653727 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.506271382s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.51s)
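Note: the WARNING above is expected for this group: the profile is started with --network-plugin=cni but no CNI is actually deployed, so workload pods cannot be scheduled and the DeployApp / UserAppExistsAfterStop / AddonExistsAfterStop subtests are no-ops. If pods were needed, a CNI manifest would have to be applied first; a sketch, reusing the flannel manifest the custom-flannel test supplies later in this report (purely illustrative, not something this test does):
  kubectl --context newest-cni-653727 apply -f testdata/kube-flannel.yaml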

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (8.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-653727 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-653727 --alsologtostderr -v=3: (8.490856624s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.49s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-653727 -n newest-cni-653727
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-653727 -n newest-cni-653727: exit status 7 (91.39294ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-653727 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (68.96s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-653727 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1209 03:26:11.634735  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-653727 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m8.542989516s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-653727 -n newest-cni-653727
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (68.96s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-fj2md" [58131abb-349f-4dcb-a0f4-05f00d634e9f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006777629s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
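Note: ControllerPod only confirms that the CNI's own daemon pod is Running before the connectivity checks start. The equivalent manual query (label and namespace taken from the log above):
  kubectl --context kindnet-298769 get pods -n kube-system -l app=kindnet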

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-298769 "pgrep -a kubelet"
I1209 03:26:26.815404  258854 config.go:182] Loaded profile config "auto-298769": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-298769 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-z4drn" [0f2be79c-7437-419e-864e-514a0bf0fad9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-z4drn" [0f2be79c-7437-419e-864e-514a0bf0fad9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.005981951s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-298769 "pgrep -a kubelet"
I1209 03:26:27.570888  258854 config.go:182] Loaded profile config "kindnet-298769": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-298769 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vbc7m" [e15ff00a-a9af-48ea-8b7a-f4e4ddfc14fb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1209 03:26:29.394369  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-vbc7m" [e15ff00a-a9af-48ea-8b7a-f4e4ddfc14fb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.005774791s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-298769 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-298769 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-298769 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
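Note: DNS, Localhost and HairPin all exec into the netcat deployment created in NetCatPod above: DNS resolves kubernetes.default from inside the pod, Localhost probes port 8080 on the pod's own loopback, and HairPin connects back to the pod through the cluster name "netcat" (nc -z is a connect-only probe; -w and -i set the timeout and interval). The same three probes, condensed:
  kubectl --context auto-298769 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context auto-298769 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context auto-298769 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"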

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-298769 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-298769 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-298769 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-653727 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (4.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-653727 --alsologtostderr -v=1
E1209 03:26:43.862937  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/old-k8s-version-435592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-653727 --alsologtostderr -v=1: (1.424682193s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-653727 -n newest-cni-653727
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-653727 -n newest-cni-653727: exit status 2 (351.847201ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-653727 -n newest-cni-653727
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-653727 -n newest-cni-653727: exit status 2 (332.164865ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-653727 --alsologtostderr -v=1
E1209 03:26:46.425176  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/old-k8s-version-435592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-653727 --alsologtostderr -v=1: (1.344479404s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-653727 -n newest-cni-653727
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-653727 -n newest-cni-653727
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (72.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-298769 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E1209 03:26:51.547139  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/old-k8s-version-435592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-298769 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m12.366175573s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (72.37s)
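Note: unlike the other network-plugin starts in this group, which pass a built-in name to --cni (kindnet, calico, flannel, ...), this one points --cni at a CNI manifest file shipped with the test data. A trimmed sketch of that invocation:
  out/minikube-linux-amd64 start -p custom-flannel-298769 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio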

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (74.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-298769 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-298769 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m14.263843382s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (74.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (110.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-298769 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E1209 03:27:01.775340  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/no-preload-042483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:27:01.781893  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/no-preload-042483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:27:01.789401  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/old-k8s-version-435592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:27:01.793923  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/no-preload-042483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:27:01.815479  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/no-preload-042483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:27:01.857058  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/no-preload-042483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:27:01.938632  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/no-preload-042483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:27:02.100414  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/no-preload-042483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:27:02.422143  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/no-preload-042483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:27:03.064527  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/no-preload-042483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:27:04.346367  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/no-preload-042483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:27:06.908548  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/no-preload-042483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:27:12.030214  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/no-preload-042483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-298769 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m50.129037472s)
--- PASS: TestNetworkPlugins/group/flannel/Start (110.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-hhlzb" [8c80029e-028e-4026-b588-286fc9cdfa64] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005306752s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-298769 "pgrep -a kubelet"
I1209 03:27:21.133550  258854 config.go:182] Loaded profile config "calico-298769": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-298769 replace --force -f testdata/netcat-deployment.yaml
I1209 03:27:21.884519  258854 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-b742p" [3e6c5cc8-f103-4ba7-8aee-492554bc1ec0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1209 03:27:22.271757  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/no-preload-042483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:27:22.271810  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/old-k8s-version-435592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-b742p" [3e6c5cc8-f103-4ba7-8aee-492554bc1ec0] Running
E1209 03:27:28.083205  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/default-k8s-diff-port-621448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:27:28.089786  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/default-k8s-diff-port-621448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:27:28.101479  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/default-k8s-diff-port-621448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:27:28.123149  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/default-k8s-diff-port-621448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:27:28.164741  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/default-k8s-diff-port-621448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:27:28.246332  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/default-k8s-diff-port-621448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:27:28.407949  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/default-k8s-diff-port-621448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:27:28.729969  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/default-k8s-diff-port-621448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:27:29.371375  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/default-k8s-diff-port-621448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:27:30.652737  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/default-k8s-diff-port-621448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.008828279s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.79s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-298769 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-298769 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1209 03:27:33.214443  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/default-k8s-diff-port-621448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-298769 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (90.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-298769 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E1209 03:27:52.465643  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/functional-545294/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-298769 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m30.790901533s)
--- PASS: TestNetworkPlugins/group/bridge/Start (90.79s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-298769 "pgrep -a kubelet"
I1209 03:28:02.087796  258854 config.go:182] Loaded profile config "custom-flannel-298769": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-298769 replace --force -f testdata/netcat-deployment.yaml
I1209 03:28:02.799299  258854 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1209 03:28:02.802707  258854 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1209 03:28:02.833720  258854 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fhhjf" [17769bc0-2304-4254-b498-ce1ddd802077] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1209 03:28:03.233848  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/old-k8s-version-435592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 03:28:08.553139  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-fhhjf" [17769bc0-2304-4254-b498-ce1ddd802077] Running
E1209 03:28:09.060367  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/default-k8s-diff-port-621448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004752574s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.82s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-298769 "pgrep -a kubelet"
I1209 03:28:10.720873  258854 config.go:182] Loaded profile config "enable-default-cni-298769": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-298769 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6vp8k" [015a9e85-cfa9-4f49-b7e7-96d7d20bcfbb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6vp8k" [015a9e85-cfa9-4f49-b7e7-96d7d20bcfbb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.01563637s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.34s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-298769 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-298769 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-298769 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-298769 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-298769 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-298769 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//data (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-607644 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/docker (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-607644 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/cni (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-607644 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/kubelet (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-607644 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/minikube (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-607644 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.20s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/toolbox (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-607644 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/boot2docker (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-607644 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.20s)

                                                
                                    
TestISOImage/VersionJSON (0.18s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-607644 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   commit: 0d7c1d9864cc7aa82e32494e32331ce8be405026
iso_test.go:118:   iso_version: v1.37.0-1765151505-21409
iso_test.go:118:   kicbase_version: v0.0.48-1764843390-22032
iso_test.go:118:   minikube_version: v1.37.0
--- PASS: TestISOImage/VersionJSON (0.18s)

                                                
                                    
TestISOImage/eBPFSupport (0.18s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-607644 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-9mv2l" [585d5827-565a-4c29-a66a-16b9ec6c38c7] Running
E1209 03:28:50.021740  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/default-k8s-diff-port-621448/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004041116s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-298769 "pgrep -a kubelet"
I1209 03:28:53.654016  258854 config.go:182] Loaded profile config "flannel-298769": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-298769 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-74c89" [8e4067e2-5987-40fc-98aa-44927e2f9dcb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-74c89" [8e4067e2-5987-40fc-98aa-44927e2f9dcb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.006109915s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-298769 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-298769 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-298769 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-298769 "pgrep -a kubelet"
I1209 03:29:22.421484  258854 config.go:182] Loaded profile config "bridge-298769": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-298769 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tg9gj" [045d2931-efa2-418d-9edc-5290cf528c56] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1209 03:29:25.156021  258854 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/old-k8s-version-435592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-tg9gj" [045d2931-efa2-418d-9edc-5290cf528c56] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004947128s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-298769 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-298769 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-298769 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)


Test skip (52/431)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0.33
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.02
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0.01
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService 0.01
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0.01
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.01
258 TestGvisorAddon 0
280 TestImageBuild 0
308 TestKicCustomNetwork 0
309 TestKicExistingNetwork 0
310 TestKicCustomSubnet 0
311 TestKicStaticIP 0
343 TestChangeNoneUser 0
346 TestScheduledStopWindows 0
348 TestSkaffold 0
350 TestInsufficientStorage 0
354 TestMissingContainerUpgrade 0
371 TestStartStop/group/disable-driver-mounts 0.18
382 TestNetworkPlugins/group/kubenet 4.04
392 TestNetworkPlugins/group/cilium 4.49
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.33s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:910: skipping: crio not supported
addons_test.go:1113: (dbg) Run:  out/minikube-linux-amd64 -p addons-712341 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.33s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:819: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:543: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1093: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-594162" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-594162
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.04s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-298769 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-298769

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-298769

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-298769

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-298769

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-298769

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-298769

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-298769

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-298769

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-298769

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-298769

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-298769

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-298769" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-298769" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22081-254936/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 09 Dec 2025 03:16:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.113:8443
  name: cert-expiration-699833
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22081-254936/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 09 Dec 2025 03:15:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.194:8443
  name: running-upgrade-213301
contexts:
- context:
    cluster: cert-expiration-699833
    extensions:
    - extension:
        last-update: Tue, 09 Dec 2025 03:16:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-699833
  name: cert-expiration-699833
- context:
    cluster: running-upgrade-213301
    user: running-upgrade-213301
  name: running-upgrade-213301
current-context: ""
kind: Config
users:
- name: cert-expiration-699833
  user:
    client-certificate: /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/cert-expiration-699833/client.crt
    client-key: /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/cert-expiration-699833/client.key
- name: running-upgrade-213301
  user:
    client-certificate: /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/running-upgrade-213301/client.crt
    client-key: /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/running-upgrade-213301/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-298769

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-298769"

                                                
                                                
----------------------- debugLogs end: kubenet-298769 [took: 3.823922379s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-298769" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-298769
--- SKIP: TestNetworkPlugins/group/kubenet (4.04s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.49s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-298769 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-298769

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-298769

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-298769

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-298769

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-298769

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-298769

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-298769

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-298769

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-298769

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-298769

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-298769

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-298769" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-298769

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-298769

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-298769

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-298769

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-298769" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-298769" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22081-254936/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 09 Dec 2025 03:16:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.113:8443
  name: cert-expiration-699833
contexts:
- context:
    cluster: cert-expiration-699833
    extensions:
    - extension:
        last-update: Tue, 09 Dec 2025 03:16:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-699833
  name: cert-expiration-699833
current-context: ""
kind: Config
users:
- name: cert-expiration-699833
  user:
    client-certificate: /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/cert-expiration-699833/client.crt
    client-key: /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/cert-expiration-699833/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-298769

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-298769" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298769"

                                                
                                                
----------------------- debugLogs end: cilium-298769 [took: 4.293003246s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-298769" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-298769
--- SKIP: TestNetworkPlugins/group/cilium (4.49s)

                                                
                                    