Test Report: KVM_Linux_crio 22127

087e852008767f332c662fe76eaa150bb5f9e6c8:2025-12-13:42757

Failed tests (14/431)

TestAddons/parallel/Ingress (159.09s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-246361 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-246361 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-246361 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [6b69c078-1088-484d-990b-d8794ed9b2c6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [6b69c078-1088-484d-990b-d8794ed9b2c6] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003936538s
I1213 09:14:26.702361  391877 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-246361 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-246361 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m16.197128231s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-246361 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-246361 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.39.185
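
The failing step above is the in-VM curl against the ingress: the remote command exits with status 28 (curl's timeout error code) after roughly 2m16s, i.e. the request to http://127.0.0.1/ inside the VM never completed. A minimal sketch for reproducing the check by hand, assuming the addons-246361 profile is still running; the 30s --max-time and the ingress-nginx-controller deployment name are illustrative assumptions, not values taken from this log:

    # pods behind the selector the test waits on (app.kubernetes.io/component=controller)
    kubectl --context addons-246361 -n ingress-nginx get pods -l app.kubernetes.io/component=controller
    # the same request the test issues, with an explicit client-side timeout (assumed value)
    out/minikube-linux-amd64 -p addons-246361 ssh "curl -sv --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"
    # recent controller logs; deployment name assumed to follow the ingress addon's usual naming
    kubectl --context addons-246361 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50
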
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-246361 -n addons-246361
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-246361 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-246361 logs -n 25: (1.198393991s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-553660                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-553660 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ --download-only -p binary-mirror-573687 --alsologtostderr --binary-mirror http://127.0.0.1:35927 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-573687 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ delete  │ -p binary-mirror-573687                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-573687 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable dashboard -p addons-246361                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-246361        │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ addons  │ disable dashboard -p addons-246361                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-246361        │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ start   │ -p addons-246361 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-246361        │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:13 UTC │
	│ addons  │ addons-246361 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-246361        │ jenkins │ v1.37.0 │ 13 Dec 25 09:13 UTC │ 13 Dec 25 09:13 UTC │
	│ addons  │ addons-246361 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-246361        │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │ 13 Dec 25 09:14 UTC │
	│ addons  │ enable headlamp -p addons-246361 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-246361        │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │ 13 Dec 25 09:14 UTC │
	│ addons  │ addons-246361 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-246361        │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │ 13 Dec 25 09:14 UTC │
	│ ssh     │ addons-246361 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-246361        │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │                     │
	│ addons  │ addons-246361 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-246361        │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │ 13 Dec 25 09:14 UTC │
	│ addons  │ addons-246361 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-246361        │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │ 13 Dec 25 09:14 UTC │
	│ ip      │ addons-246361 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-246361        │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │ 13 Dec 25 09:14 UTC │
	│ addons  │ addons-246361 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-246361        │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │ 13 Dec 25 09:14 UTC │
	│ addons  │ addons-246361 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-246361        │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │ 13 Dec 25 09:14 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-246361                                                                                                                                                                                                                                                                                                                                                                                         │ addons-246361        │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │ 13 Dec 25 09:14 UTC │
	│ addons  │ addons-246361 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-246361        │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │ 13 Dec 25 09:14 UTC │
	│ addons  │ addons-246361 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-246361        │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │ 13 Dec 25 09:14 UTC │
	│ ssh     │ addons-246361 ssh cat /opt/local-path-provisioner/pvc-b8114b46-aff7-41f0-9a17-c8dadafee4e6_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-246361        │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │ 13 Dec 25 09:14 UTC │
	│ addons  │ addons-246361 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-246361        │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │ 13 Dec 25 09:15 UTC │
	│ addons  │ addons-246361 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-246361        │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │ 13 Dec 25 09:14 UTC │
	│ addons  │ addons-246361 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-246361        │ jenkins │ v1.37.0 │ 13 Dec 25 09:15 UTC │ 13 Dec 25 09:15 UTC │
	│ addons  │ addons-246361 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-246361        │ jenkins │ v1.37.0 │ 13 Dec 25 09:15 UTC │ 13 Dec 25 09:15 UTC │
	│ ip      │ addons-246361 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-246361        │ jenkins │ v1.37.0 │ 13 Dec 25 09:16 UTC │ 13 Dec 25 09:16 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:11:46
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:11:46.953001  392700 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:11:46.953255  392700 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:46.953265  392700 out.go:374] Setting ErrFile to fd 2...
	I1213 09:11:46.953270  392700 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:46.953483  392700 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
	I1213 09:11:46.954002  392700 out.go:368] Setting JSON to false
	I1213 09:11:46.954894  392700 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3256,"bootTime":1765613851,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:11:46.954956  392700 start.go:143] virtualization: kvm guest
	I1213 09:11:46.957081  392700 out.go:179] * [addons-246361] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:11:46.958544  392700 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 09:11:46.958548  392700 notify.go:221] Checking for updates...
	I1213 09:11:46.961364  392700 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:11:46.962667  392700 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
	I1213 09:11:46.964100  392700 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
	I1213 09:11:46.965372  392700 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:11:46.966621  392700 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:11:46.968029  392700 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:11:46.999316  392700 out.go:179] * Using the kvm2 driver based on user configuration
	I1213 09:11:47.000473  392700 start.go:309] selected driver: kvm2
	I1213 09:11:47.000496  392700 start.go:927] validating driver "kvm2" against <nil>
	I1213 09:11:47.000508  392700 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:11:47.001189  392700 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 09:11:47.001452  392700 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:11:47.001477  392700 cni.go:84] Creating CNI manager for ""
	I1213 09:11:47.001524  392700 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 09:11:47.001534  392700 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 09:11:47.001579  392700 start.go:353] cluster config:
	{Name:addons-246361 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-246361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:47.001664  392700 iso.go:125] acquiring lock: {Name:mk4ce8bfab58620efe86d1c7a68d79ed9c81b6ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:11:47.003178  392700 out.go:179] * Starting "addons-246361" primary control-plane node in "addons-246361" cluster
	I1213 09:11:47.004249  392700 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 09:11:47.004279  392700 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-387918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 09:11:47.004286  392700 cache.go:65] Caching tarball of preloaded images
	I1213 09:11:47.004378  392700 preload.go:238] Found /home/jenkins/minikube-integration/22127-387918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 09:11:47.004389  392700 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 09:11:47.004695  392700 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/config.json ...
	I1213 09:11:47.004719  392700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/config.json: {Name:mkf301320877bad44745f7d6b1089c83541b6e85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:11:47.004892  392700 start.go:360] acquireMachinesLock for addons-246361: {Name:mk911c6c71130df32abbe489ec2f7be251c727ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 09:11:47.004941  392700 start.go:364] duration metric: took 34.738µs to acquireMachinesLock for "addons-246361"
	I1213 09:11:47.004960  392700 start.go:93] Provisioning new machine with config: &{Name:addons-246361 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-246361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 09:11:47.005019  392700 start.go:125] createHost starting for "" (driver="kvm2")
	I1213 09:11:47.006513  392700 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1213 09:11:47.006683  392700 start.go:159] libmachine.API.Create for "addons-246361" (driver="kvm2")
	I1213 09:11:47.006714  392700 client.go:173] LocalClient.Create starting
	I1213 09:11:47.006817  392700 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem
	I1213 09:11:47.114705  392700 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/cert.pem
	I1213 09:11:47.172692  392700 main.go:143] libmachine: creating domain...
	I1213 09:11:47.172717  392700 main.go:143] libmachine: creating network...
	I1213 09:11:47.174220  392700 main.go:143] libmachine: found existing default network
	I1213 09:11:47.174518  392700 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1213 09:11:47.175188  392700 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed88f0}
	I1213 09:11:47.175312  392700 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-246361</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1213 09:11:47.181642  392700 main.go:143] libmachine: creating private network mk-addons-246361 192.168.39.0/24...
	I1213 09:11:47.252168  392700 main.go:143] libmachine: private network mk-addons-246361 192.168.39.0/24 created
	I1213 09:11:47.252468  392700 main.go:143] libmachine: <network>
	  <name>mk-addons-246361</name>
	  <uuid>e7255bda-accc-46cf-a38c-4f99131fe471</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:a7:cb:c4'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1213 09:11:47.252503  392700 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361 ...
	I1213 09:11:47.252533  392700 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22127-387918/.minikube/cache/iso/amd64/minikube-v1.37.0-1765481609-22101-amd64.iso
	I1213 09:11:47.252548  392700 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22127-387918/.minikube
	I1213 09:11:47.252665  392700 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22127-387918/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22127-387918/.minikube/cache/iso/amd64/minikube-v1.37.0-1765481609-22101-amd64.iso...
	I1213 09:11:47.516414  392700 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa...
	I1213 09:11:47.672714  392700 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/addons-246361.rawdisk...
	I1213 09:11:47.672766  392700 main.go:143] libmachine: Writing magic tar header
	I1213 09:11:47.672802  392700 main.go:143] libmachine: Writing SSH key tar header
	I1213 09:11:47.672879  392700 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361 ...
	I1213 09:11:47.672939  392700 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361
	I1213 09:11:47.672962  392700 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361 (perms=drwx------)
	I1213 09:11:47.672972  392700 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22127-387918/.minikube/machines
	I1213 09:11:47.672981  392700 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22127-387918/.minikube/machines (perms=drwxr-xr-x)
	I1213 09:11:47.672992  392700 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22127-387918/.minikube
	I1213 09:11:47.673010  392700 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22127-387918/.minikube (perms=drwxr-xr-x)
	I1213 09:11:47.673020  392700 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22127-387918
	I1213 09:11:47.673031  392700 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22127-387918 (perms=drwxrwxr-x)
	I1213 09:11:47.673041  392700 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1213 09:11:47.673055  392700 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1213 09:11:47.673070  392700 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1213 09:11:47.673084  392700 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1213 09:11:47.673124  392700 main.go:143] libmachine: checking permissions on dir: /home
	I1213 09:11:47.673139  392700 main.go:143] libmachine: skipping /home - not owner
	I1213 09:11:47.673144  392700 main.go:143] libmachine: defining domain...
	I1213 09:11:47.674523  392700 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-246361</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/addons-246361.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-246361'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1213 09:11:47.683915  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:ee:7c:cf in network default
	I1213 09:11:47.684655  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:11:47.684677  392700 main.go:143] libmachine: starting domain...
	I1213 09:11:47.684681  392700 main.go:143] libmachine: ensuring networks are active...
	I1213 09:11:47.685511  392700 main.go:143] libmachine: Ensuring network default is active
	I1213 09:11:47.685936  392700 main.go:143] libmachine: Ensuring network mk-addons-246361 is active
	I1213 09:11:47.686562  392700 main.go:143] libmachine: getting domain XML...
	I1213 09:11:47.687604  392700 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-246361</name>
	  <uuid>27894c69-ae15-4bb1-a762-2eea43d7ca9d</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/addons-246361.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:2b:24:a6'/>
	      <source network='mk-addons-246361'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:ee:7c:cf'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1213 09:11:48.993826  392700 main.go:143] libmachine: waiting for domain to start...
	I1213 09:11:48.995270  392700 main.go:143] libmachine: domain is now running
	I1213 09:11:48.995297  392700 main.go:143] libmachine: waiting for IP...
	I1213 09:11:48.996059  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:11:48.996619  392700 main.go:143] libmachine: no network interface addresses found for domain addons-246361 (source=lease)
	I1213 09:11:48.996633  392700 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:11:48.996967  392700 main.go:143] libmachine: unable to find current IP address of domain addons-246361 in network mk-addons-246361 (interfaces detected: [])
	I1213 09:11:48.997028  392700 retry.go:31] will retry after 218.800416ms: waiting for domain to come up
	I1213 09:11:49.217537  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:11:49.218123  392700 main.go:143] libmachine: no network interface addresses found for domain addons-246361 (source=lease)
	I1213 09:11:49.218141  392700 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:11:49.218453  392700 main.go:143] libmachine: unable to find current IP address of domain addons-246361 in network mk-addons-246361 (interfaces detected: [])
	I1213 09:11:49.218514  392700 retry.go:31] will retry after 270.803348ms: waiting for domain to come up
	I1213 09:11:49.491302  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:11:49.491900  392700 main.go:143] libmachine: no network interface addresses found for domain addons-246361 (source=lease)
	I1213 09:11:49.491922  392700 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:11:49.492318  392700 main.go:143] libmachine: unable to find current IP address of domain addons-246361 in network mk-addons-246361 (interfaces detected: [])
	I1213 09:11:49.492361  392700 retry.go:31] will retry after 361.360348ms: waiting for domain to come up
	I1213 09:11:49.855158  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:11:49.855771  392700 main.go:143] libmachine: no network interface addresses found for domain addons-246361 (source=lease)
	I1213 09:11:49.855791  392700 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:11:49.856123  392700 main.go:143] libmachine: unable to find current IP address of domain addons-246361 in network mk-addons-246361 (interfaces detected: [])
	I1213 09:11:49.856169  392700 retry.go:31] will retry after 523.235093ms: waiting for domain to come up
	I1213 09:11:50.380880  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:11:50.381340  392700 main.go:143] libmachine: no network interface addresses found for domain addons-246361 (source=lease)
	I1213 09:11:50.381358  392700 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:11:50.381604  392700 main.go:143] libmachine: unable to find current IP address of domain addons-246361 in network mk-addons-246361 (interfaces detected: [])
	I1213 09:11:50.381649  392700 retry.go:31] will retry after 458.959376ms: waiting for domain to come up
	I1213 09:11:50.842674  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:11:50.843207  392700 main.go:143] libmachine: no network interface addresses found for domain addons-246361 (source=lease)
	I1213 09:11:50.843223  392700 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:11:50.843565  392700 main.go:143] libmachine: unable to find current IP address of domain addons-246361 in network mk-addons-246361 (interfaces detected: [])
	I1213 09:11:50.843617  392700 retry.go:31] will retry after 910.968695ms: waiting for domain to come up
	I1213 09:11:51.755732  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:11:51.756361  392700 main.go:143] libmachine: no network interface addresses found for domain addons-246361 (source=lease)
	I1213 09:11:51.756379  392700 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:11:51.756683  392700 main.go:143] libmachine: unable to find current IP address of domain addons-246361 in network mk-addons-246361 (interfaces detected: [])
	I1213 09:11:51.756726  392700 retry.go:31] will retry after 919.479091ms: waiting for domain to come up
	I1213 09:11:52.677919  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:11:52.678554  392700 main.go:143] libmachine: no network interface addresses found for domain addons-246361 (source=lease)
	I1213 09:11:52.678572  392700 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:11:52.678909  392700 main.go:143] libmachine: unable to find current IP address of domain addons-246361 in network mk-addons-246361 (interfaces detected: [])
	I1213 09:11:52.678951  392700 retry.go:31] will retry after 945.042693ms: waiting for domain to come up
	I1213 09:11:53.626197  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:11:53.626896  392700 main.go:143] libmachine: no network interface addresses found for domain addons-246361 (source=lease)
	I1213 09:11:53.626916  392700 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:11:53.627220  392700 main.go:143] libmachine: unable to find current IP address of domain addons-246361 in network mk-addons-246361 (interfaces detected: [])
	I1213 09:11:53.627262  392700 retry.go:31] will retry after 1.295865151s: waiting for domain to come up
	I1213 09:11:54.924780  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:11:54.925369  392700 main.go:143] libmachine: no network interface addresses found for domain addons-246361 (source=lease)
	I1213 09:11:54.925386  392700 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:11:54.925696  392700 main.go:143] libmachine: unable to find current IP address of domain addons-246361 in network mk-addons-246361 (interfaces detected: [])
	I1213 09:11:54.925738  392700 retry.go:31] will retry after 2.283738815s: waiting for domain to come up
	I1213 09:11:57.210973  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:11:57.211665  392700 main.go:143] libmachine: no network interface addresses found for domain addons-246361 (source=lease)
	I1213 09:11:57.211717  392700 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:11:57.212170  392700 main.go:143] libmachine: unable to find current IP address of domain addons-246361 in network mk-addons-246361 (interfaces detected: [])
	I1213 09:11:57.212214  392700 retry.go:31] will retry after 1.761254796s: waiting for domain to come up
	I1213 09:11:58.976540  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:11:58.977240  392700 main.go:143] libmachine: no network interface addresses found for domain addons-246361 (source=lease)
	I1213 09:11:58.977265  392700 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:11:58.977586  392700 main.go:143] libmachine: unable to find current IP address of domain addons-246361 in network mk-addons-246361 (interfaces detected: [])
	I1213 09:11:58.977630  392700 retry.go:31] will retry after 2.837727411s: waiting for domain to come up
	I1213 09:12:01.818582  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:01.819082  392700 main.go:143] libmachine: no network interface addresses found for domain addons-246361 (source=lease)
	I1213 09:12:01.819098  392700 main.go:143] libmachine: trying to list again with source=arp
	I1213 09:12:01.819392  392700 main.go:143] libmachine: unable to find current IP address of domain addons-246361 in network mk-addons-246361 (interfaces detected: [])
	I1213 09:12:01.819433  392700 retry.go:31] will retry after 3.284023822s: waiting for domain to come up
	I1213 09:12:05.107142  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:05.107836  392700 main.go:143] libmachine: domain addons-246361 has current primary IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:05.107852  392700 main.go:143] libmachine: found domain IP: 192.168.39.185
	I1213 09:12:05.107860  392700 main.go:143] libmachine: reserving static IP address...
	I1213 09:12:05.108333  392700 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-246361", mac: "52:54:00:2b:24:a6", ip: "192.168.39.185"} in network mk-addons-246361
	I1213 09:12:05.312161  392700 main.go:143] libmachine: reserved static IP address 192.168.39.185 for domain addons-246361
	I1213 09:12:05.312194  392700 main.go:143] libmachine: waiting for SSH...
	I1213 09:12:05.312202  392700 main.go:143] libmachine: Getting to WaitForSSH function...
	I1213 09:12:05.314966  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:05.315529  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:05.315569  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:05.315858  392700 main.go:143] libmachine: Using SSH client type: native
	I1213 09:12:05.316182  392700 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1213 09:12:05.316197  392700 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1213 09:12:05.428517  392700 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 09:12:05.428931  392700 main.go:143] libmachine: domain creation complete
	I1213 09:12:05.430388  392700 machine.go:94] provisionDockerMachine start ...
	I1213 09:12:05.433139  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:05.433592  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:05.433614  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:05.433805  392700 main.go:143] libmachine: Using SSH client type: native
	I1213 09:12:05.434024  392700 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1213 09:12:05.434034  392700 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 09:12:05.546519  392700 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1213 09:12:05.546565  392700 buildroot.go:166] provisioning hostname "addons-246361"
	I1213 09:12:05.549531  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:05.549940  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:05.549969  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:05.550169  392700 main.go:143] libmachine: Using SSH client type: native
	I1213 09:12:05.550402  392700 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1213 09:12:05.550418  392700 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-246361 && echo "addons-246361" | sudo tee /etc/hostname
	I1213 09:12:05.688594  392700 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-246361
	
	I1213 09:12:05.692571  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:05.693220  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:05.693262  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:05.693512  392700 main.go:143] libmachine: Using SSH client type: native
	I1213 09:12:05.693738  392700 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1213 09:12:05.693779  392700 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-246361' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-246361/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-246361' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 09:12:05.813299  392700 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 09:12:05.813361  392700 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22127-387918/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-387918/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-387918/.minikube}
	I1213 09:12:05.813392  392700 buildroot.go:174] setting up certificates
	I1213 09:12:05.813403  392700 provision.go:84] configureAuth start
	I1213 09:12:05.816473  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:05.816881  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:05.816913  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:05.819100  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:05.819451  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:05.819474  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:05.819589  392700 provision.go:143] copyHostCerts
	I1213 09:12:05.819665  392700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-387918/.minikube/ca.pem (1078 bytes)
	I1213 09:12:05.819838  392700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-387918/.minikube/cert.pem (1123 bytes)
	I1213 09:12:05.819904  392700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-387918/.minikube/key.pem (1675 bytes)
	I1213 09:12:05.819957  392700 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-387918/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca-key.pem org=jenkins.addons-246361 san=[127.0.0.1 192.168.39.185 addons-246361 localhost minikube]
	I1213 09:12:05.945888  392700 provision.go:177] copyRemoteCerts
	I1213 09:12:05.945962  392700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 09:12:05.948610  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:05.948996  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:05.949019  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:05.949203  392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
	I1213 09:12:06.034349  392700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 09:12:06.063967  392700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 09:12:06.093197  392700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1213 09:12:06.123178  392700 provision.go:87] duration metric: took 309.747511ms to configureAuth
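configureAuth generated a server certificate with SANs [127.0.0.1 192.168.39.185 addons-246361 localhost minikube] and copied it to /etc/docker on the guest. minikube does this in-process, so the openssl call below is only a hedged sketch for inspecting what landed there (run inside the guest, e.g. via minikube ssh):
  $ sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
  # should list the DNS names addons-246361, localhost, minikube and the IPs 127.0.0.1, 192.168.39.185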
	I1213 09:12:06.123207  392700 buildroot.go:189] setting minikube options for container-runtime
	I1213 09:12:06.123410  392700 config.go:182] Loaded profile config "addons-246361": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:12:06.127028  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:06.127529  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:06.127571  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:06.127819  392700 main.go:143] libmachine: Using SSH client type: native
	I1213 09:12:06.128034  392700 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1213 09:12:06.128050  392700 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 09:12:06.362169  392700 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 09:12:06.362202  392700 machine.go:97] duration metric: took 931.795471ms to provisionDockerMachine
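The CRIO_MINIKUBE_OPTIONS drop-in written just above marks the service CIDR 10.96.0.0/12 as an insecure registry range (used, for example, by the registry addon); on the guest it can be checked with:
  $ cat /etc/sysconfig/crio.minikube   # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
  $ systemctl is-active crio           # crio was restarted after the write, so this should report: active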
	I1213 09:12:06.362213  392700 client.go:176] duration metric: took 19.355494352s to LocalClient.Create
	I1213 09:12:06.362233  392700 start.go:167] duration metric: took 19.355549599s to libmachine.API.Create "addons-246361"
	I1213 09:12:06.362244  392700 start.go:293] postStartSetup for "addons-246361" (driver="kvm2")
	I1213 09:12:06.362258  392700 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 09:12:06.362390  392700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 09:12:06.365396  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:06.365868  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:06.365898  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:06.366081  392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
	I1213 09:12:06.456866  392700 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 09:12:06.462096  392700 info.go:137] Remote host: Buildroot 2025.02
	I1213 09:12:06.462139  392700 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-387918/.minikube/addons for local assets ...
	I1213 09:12:06.462228  392700 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-387918/.minikube/files for local assets ...
	I1213 09:12:06.462255  392700 start.go:296] duration metric: took 100.003451ms for postStartSetup
	I1213 09:12:06.465450  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:06.465846  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:06.465879  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:06.466120  392700 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/config.json ...
	I1213 09:12:06.466372  392700 start.go:128] duration metric: took 19.461339964s to createHost
	I1213 09:12:06.468454  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:06.468787  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:06.468815  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:06.468991  392700 main.go:143] libmachine: Using SSH client type: native
	I1213 09:12:06.469212  392700 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1213 09:12:06.469223  392700 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1213 09:12:06.577761  392700 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765617126.545180359
	
	I1213 09:12:06.577788  392700 fix.go:216] guest clock: 1765617126.545180359
	I1213 09:12:06.577797  392700 fix.go:229] Guest: 2025-12-13 09:12:06.545180359 +0000 UTC Remote: 2025-12-13 09:12:06.466386774 +0000 UTC m=+19.562568069 (delta=78.793585ms)
	I1213 09:12:06.577822  392700 fix.go:200] guest clock delta is within tolerance: 78.793585ms
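Worked out, the delta above is just the guest timestamp minus the host timestamp:
  # 1765617126.545180359 s − 1765617126.466386774 s = 0.078793585 s ≈ 78.79 ms,
  # well inside the skew tolerance, so the guest clock is left untouched.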
	I1213 09:12:06.577829  392700 start.go:83] releasing machines lock for "addons-246361", held for 19.572878213s
	I1213 09:12:06.580889  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:06.581314  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:06.581353  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:06.581916  392700 ssh_runner.go:195] Run: cat /version.json
	I1213 09:12:06.581997  392700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 09:12:06.585261  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:06.585295  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:06.585742  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:06.585756  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:06.585776  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:06.585775  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:06.585994  392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
	I1213 09:12:06.585999  392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
	I1213 09:12:06.688401  392700 ssh_runner.go:195] Run: systemctl --version
	I1213 09:12:06.694893  392700 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 09:12:06.853274  392700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 09:12:06.859776  392700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 09:12:06.859850  392700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 09:12:06.880046  392700 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 09:12:06.880075  392700 start.go:496] detecting cgroup driver to use...
	I1213 09:12:06.880145  392700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 09:12:06.900037  392700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 09:12:06.917073  392700 docker.go:218] disabling cri-docker service (if available) ...
	I1213 09:12:06.917159  392700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 09:12:06.934984  392700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 09:12:06.951958  392700 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 09:12:07.099427  392700 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 09:12:07.312861  392700 docker.go:234] disabling docker service ...
	I1213 09:12:07.312937  392700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 09:12:07.329221  392700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 09:12:07.345058  392700 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 09:12:07.498908  392700 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 09:12:07.638431  392700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 09:12:07.653883  392700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:12:07.676228  392700 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 09:12:07.676303  392700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:07.688569  392700 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 09:12:07.688655  392700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:07.703485  392700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:07.716470  392700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:07.729815  392700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 09:12:07.744045  392700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:07.756792  392700 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:07.777883  392700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
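After this series of sed edits, the relevant keys in /etc/crio/crio.conf.d/02-crio.conf should read roughly as follows (reconstructed from the commands above, not dumped from the node):
  $ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
  # pause_image = "registry.k8s.io/pause:3.10.1"
  # cgroup_manager = "cgroupfs"
  # conmon_cgroup = "pod"
  #   "net.ipv4.ip_unprivileged_port_start=0",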
	I1213 09:12:07.790572  392700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 09:12:07.801505  392700 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 09:12:07.801581  392700 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 09:12:07.822519  392700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
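The earlier sysctl failure only means the br_netfilter module was not loaded yet; the modprobe/echo pair above is the fix. Done by hand on the guest:
  $ sudo modprobe br_netfilter                        # creates /proc/sys/net/bridge/*
  $ sudo sysctl net.bridge.bridge-nf-call-iptables    # typically reports 1 once the module is loaded
  $ echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward   # required so the node can route pod traffic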
	I1213 09:12:07.835368  392700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:12:07.982998  392700 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 09:12:08.095368  392700 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 09:12:08.095481  392700 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 09:12:08.101267  392700 start.go:564] Will wait 60s for crictl version
	I1213 09:12:08.101403  392700 ssh_runner.go:195] Run: which crictl
	I1213 09:12:08.105718  392700 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 09:12:08.141983  392700 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1213 09:12:08.142145  392700 ssh_runner.go:195] Run: crio --version
	I1213 09:12:08.171160  392700 ssh_runner.go:195] Run: crio --version
	I1213 09:12:08.201894  392700 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1213 09:12:08.206180  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:08.206583  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:08.206607  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:08.206826  392700 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1213 09:12:08.211573  392700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:12:08.227192  392700 kubeadm.go:884] updating cluster {Name:addons-246361 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-246361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 09:12:08.227381  392700 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 09:12:08.227450  392700 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:12:08.265582  392700 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1213 09:12:08.265672  392700 ssh_runner.go:195] Run: which lz4
	I1213 09:12:08.270230  392700 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 09:12:08.275131  392700 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 09:12:08.275178  392700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1213 09:12:09.527842  392700 crio.go:462] duration metric: took 1.257648109s to copy over tarball
	I1213 09:12:09.527970  392700 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 09:12:11.010824  392700 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.482811625s)
	I1213 09:12:11.010864  392700 crio.go:469] duration metric: took 1.482989092s to extract the tarball
	I1213 09:12:11.010876  392700 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 09:12:11.047375  392700 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:12:11.091571  392700 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:12:11.091605  392700 cache_images.go:86] Images are preloaded, skipping loading
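A quick way to confirm the preload really populated the CRI-O image store, mirroring the crictl call above:
  $ sudo crictl images | grep 'registry.k8s.io/kube-apiserver'
  # should list tag v1.34.2 once the preloaded tarball has been extracted under /var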
	I1213 09:12:11.091617  392700 kubeadm.go:935] updating node { 192.168.39.185 8443 v1.34.2 crio true true} ...
	I1213 09:12:11.091754  392700 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-246361 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-246361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 09:12:11.091833  392700 ssh_runner.go:195] Run: crio config
	I1213 09:12:11.139099  392700 cni.go:84] Creating CNI manager for ""
	I1213 09:12:11.139129  392700 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 09:12:11.139153  392700 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 09:12:11.139177  392700 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-246361 NodeName:addons-246361 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 09:12:11.139296  392700 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-246361"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.185"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 09:12:11.139379  392700 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 09:12:11.152394  392700 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 09:12:11.152483  392700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 09:12:11.165051  392700 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1213 09:12:11.186035  392700 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 09:12:11.206206  392700 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
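The 2216-byte file written above is the kubeadm configuration rendered earlier in this log; it is copied to /var/tmp/minikube/kubeadm.yaml a few steps later, just before kubeadm init runs. An optional sanity check, assuming the validate subcommand is available in this kubeadm build:
  $ sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new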
	I1213 09:12:11.227252  392700 ssh_runner.go:195] Run: grep 192.168.39.185	control-plane.minikube.internal$ /etc/hosts
	I1213 09:12:11.231476  392700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.185	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:12:11.245876  392700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:12:11.388594  392700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:12:11.419994  392700 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361 for IP: 192.168.39.185
	I1213 09:12:11.420037  392700 certs.go:195] generating shared ca certs ...
	I1213 09:12:11.420056  392700 certs.go:227] acquiring lock for ca certs: {Name:mkd63ae6418df38b62936a9f8faa40fdd87e4397 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:12:11.420235  392700 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-387918/.minikube/ca.key
	I1213 09:12:11.490308  392700 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-387918/.minikube/ca.crt ...
	I1213 09:12:11.490357  392700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/.minikube/ca.crt: {Name:mkf3d78756412421f921ae57a0b47cb7979b33b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:12:11.490556  392700 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-387918/.minikube/ca.key ...
	I1213 09:12:11.490569  392700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/.minikube/ca.key: {Name:mk7072f2cd64776d50132ee3ce97378f6d0dff62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:12:11.490677  392700 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-387918/.minikube/proxy-client-ca.key
	I1213 09:12:11.528371  392700 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-387918/.minikube/proxy-client-ca.crt ...
	I1213 09:12:11.528406  392700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/.minikube/proxy-client-ca.crt: {Name:mk577337d3eb3baea291abf0fe19ba51fb96fe3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:12:11.528602  392700 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-387918/.minikube/proxy-client-ca.key ...
	I1213 09:12:11.528624  392700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/.minikube/proxy-client-ca.key: {Name:mk4db650447281a90b0762e0e393b5e90309227a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:12:11.528734  392700 certs.go:257] generating profile certs ...
	I1213 09:12:11.528815  392700 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.key
	I1213 09:12:11.528845  392700 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt with IP's: []
	I1213 09:12:11.596658  392700 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt ...
	I1213 09:12:11.596693  392700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: {Name:mk607a1e5ee3c49e27b769dcb5a9e59fce4a91c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:12:11.596882  392700 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.key ...
	I1213 09:12:11.596904  392700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.key: {Name:mk9bc892cca52ec705cdf46536ac1a653ead1c4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:12:11.597019  392700 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/apiserver.key.69166ee8
	I1213 09:12:11.597047  392700 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/apiserver.crt.69166ee8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.185]
	I1213 09:12:11.636467  392700 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/apiserver.crt.69166ee8 ...
	I1213 09:12:11.636501  392700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/apiserver.crt.69166ee8: {Name:mk1b598603a8e21a8e6cc7ab13eaebd38083b673 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:12:11.636698  392700 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/apiserver.key.69166ee8 ...
	I1213 09:12:11.636718  392700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/apiserver.key.69166ee8: {Name:mkc041ba359e7131e4f5ee39710ad799a6e00ad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:12:11.636827  392700 certs.go:382] copying /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/apiserver.crt.69166ee8 -> /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/apiserver.crt
	I1213 09:12:11.636948  392700 certs.go:386] copying /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/apiserver.key.69166ee8 -> /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/apiserver.key
	I1213 09:12:11.637043  392700 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/proxy-client.key
	I1213 09:12:11.637072  392700 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/proxy-client.crt with IP's: []
	I1213 09:12:11.763081  392700 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/proxy-client.crt ...
	I1213 09:12:11.763114  392700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/proxy-client.crt: {Name:mk7584356f17525f94e9019268d0e8eafe4d8ec4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:12:11.763316  392700 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/proxy-client.key ...
	I1213 09:12:11.763348  392700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/proxy-client.key: {Name:mke59c7288708e4ec1ea6621d04c16802aa70d96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:12:11.763562  392700 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 09:12:11.763608  392700 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem (1078 bytes)
	I1213 09:12:11.763631  392700 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/cert.pem (1123 bytes)
	I1213 09:12:11.763652  392700 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/key.pem (1675 bytes)
	I1213 09:12:11.764312  392700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 09:12:11.795709  392700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 09:12:11.825153  392700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 09:12:11.854072  392700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 09:12:11.883184  392700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 09:12:11.912490  392700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 09:12:11.941768  392700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 09:12:11.972297  392700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 09:12:12.002013  392700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 09:12:12.031784  392700 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 09:12:12.052504  392700 ssh_runner.go:195] Run: openssl version
	I1213 09:12:12.059406  392700 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:12:12.074753  392700 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 09:12:12.093351  392700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:12:12.098799  392700 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:12:12.098882  392700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:12:12.109107  392700 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 09:12:12.122206  392700 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
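The b5213941.0 name is the OpenSSL subject hash of minikubeCA, i.e. exactly what the `openssl x509 -hash -noout` call above printed; the symlink makes the CA discoverable to tools that scan /etc/ssl/certs by hash:
  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
  $ ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem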
	I1213 09:12:12.134795  392700 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 09:12:12.142309  392700 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 09:12:12.142410  392700 kubeadm.go:401] StartCluster: {Name:addons-246361 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-246361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:12:12.142514  392700 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 09:12:12.142587  392700 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 09:12:12.179166  392700 cri.go:89] found id: ""
	I1213 09:12:12.179251  392700 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 09:12:12.191347  392700 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 09:12:12.203307  392700 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 09:12:12.214947  392700 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 09:12:12.214973  392700 kubeadm.go:158] found existing configuration files:
	
	I1213 09:12:12.215030  392700 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 09:12:12.225728  392700 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 09:12:12.225801  392700 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 09:12:12.237334  392700 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 09:12:12.247869  392700 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 09:12:12.247932  392700 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 09:12:12.261137  392700 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 09:12:12.272479  392700 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 09:12:12.272550  392700 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 09:12:12.284071  392700 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 09:12:12.294641  392700 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 09:12:12.294702  392700 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 09:12:12.306441  392700 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1213 09:12:12.449873  392700 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 09:12:24.708477  392700 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 09:12:24.708588  392700 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 09:12:24.708723  392700 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 09:12:24.708877  392700 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 09:12:24.709023  392700 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 09:12:24.709116  392700 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 09:12:24.710915  392700 out.go:252]   - Generating certificates and keys ...
	I1213 09:12:24.711018  392700 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 09:12:24.711113  392700 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 09:12:24.711210  392700 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 09:12:24.711297  392700 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 09:12:24.711373  392700 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 09:12:24.711438  392700 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 09:12:24.711507  392700 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 09:12:24.711697  392700 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-246361 localhost] and IPs [192.168.39.185 127.0.0.1 ::1]
	I1213 09:12:24.711823  392700 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 09:12:24.712014  392700 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-246361 localhost] and IPs [192.168.39.185 127.0.0.1 ::1]
	I1213 09:12:24.712116  392700 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 09:12:24.712222  392700 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 09:12:24.712306  392700 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 09:12:24.712415  392700 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 09:12:24.712493  392700 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 09:12:24.712573  392700 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 09:12:24.712645  392700 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 09:12:24.712732  392700 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 09:12:24.712803  392700 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 09:12:24.712870  392700 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 09:12:24.712970  392700 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 09:12:24.714311  392700 out.go:252]   - Booting up control plane ...
	I1213 09:12:24.714439  392700 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 09:12:24.714507  392700 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 09:12:24.714561  392700 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 09:12:24.714670  392700 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 09:12:24.714806  392700 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 09:12:24.714912  392700 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 09:12:24.715003  392700 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 09:12:24.715035  392700 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 09:12:24.715156  392700 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 09:12:24.715240  392700 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 09:12:24.715300  392700 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001842246s
	I1213 09:12:24.715397  392700 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 09:12:24.715482  392700 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.185:8443/livez
	I1213 09:12:24.715580  392700 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 09:12:24.715648  392700 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 09:12:24.715745  392700 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.608913029s
	I1213 09:12:24.715808  392700 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.935427412s
	I1213 09:12:24.715893  392700 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001770255s
	I1213 09:12:24.716042  392700 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 09:12:24.716168  392700 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 09:12:24.716224  392700 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 09:12:24.716405  392700 kubeadm.go:319] [mark-control-plane] Marking the node addons-246361 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 09:12:24.716471  392700 kubeadm.go:319] [bootstrap-token] Using token: xb92sz.u2mw76x31y0nlqob
	I1213 09:12:24.718757  392700 out.go:252]   - Configuring RBAC rules ...
	I1213 09:12:24.718870  392700 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 09:12:24.718967  392700 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 09:12:24.719118  392700 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 09:12:24.719319  392700 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 09:12:24.719451  392700 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 09:12:24.719535  392700 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 09:12:24.719639  392700 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 09:12:24.719677  392700 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 09:12:24.719715  392700 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 09:12:24.719720  392700 kubeadm.go:319] 
	I1213 09:12:24.719785  392700 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 09:12:24.719791  392700 kubeadm.go:319] 
	I1213 09:12:24.719851  392700 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 09:12:24.719857  392700 kubeadm.go:319] 
	I1213 09:12:24.719876  392700 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 09:12:24.719931  392700 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 09:12:24.719971  392700 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 09:12:24.719977  392700 kubeadm.go:319] 
	I1213 09:12:24.720031  392700 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 09:12:24.720041  392700 kubeadm.go:319] 
	I1213 09:12:24.720078  392700 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 09:12:24.720087  392700 kubeadm.go:319] 
	I1213 09:12:24.720131  392700 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 09:12:24.720190  392700 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 09:12:24.720245  392700 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 09:12:24.720251  392700 kubeadm.go:319] 
	I1213 09:12:24.720333  392700 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 09:12:24.720395  392700 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 09:12:24.720400  392700 kubeadm.go:319] 
	I1213 09:12:24.720469  392700 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token xb92sz.u2mw76x31y0nlqob \
	I1213 09:12:24.720562  392700 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8bcd1fb3d3850626282ed6c823b047645feff2758552312516eb7c1e818bc63a \
	I1213 09:12:24.720601  392700 kubeadm.go:319] 	--control-plane 
	I1213 09:12:24.720607  392700 kubeadm.go:319] 
	I1213 09:12:24.720710  392700 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 09:12:24.720728  392700 kubeadm.go:319] 
	I1213 09:12:24.720816  392700 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token xb92sz.u2mw76x31y0nlqob \
	I1213 09:12:24.720954  392700 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8bcd1fb3d3850626282ed6c823b047645feff2758552312516eb7c1e818bc63a 
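The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's public key. With the CA at the path minikube uses, it can be recomputed on the node with the standard kubeadm recipe:
  $ openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
  # should print 8bcd1fb3d3850626... (the value shown in the join commands above)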
	I1213 09:12:24.720973  392700 cni.go:84] Creating CNI manager for ""
	I1213 09:12:24.720987  392700 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 09:12:24.722564  392700 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 09:12:24.723835  392700 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 09:12:24.741347  392700 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
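The 496-byte conflist written above is minikube's generated bridge CNI configuration; only the pod CIDR (10.244.0.0/16, from the kubeadm options earlier) is assumed here. On the node:
  $ sudo cat /etc/cni/net.d/1-k8s.conflist   # bridge plugin config covering the 10.244.0.0/16 pod network
  $ ls /etc/cni/net.d/                       # 87-podman-bridge.conflist remains parked as *.mk_disabled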
	I1213 09:12:24.767123  392700 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 09:12:24.767276  392700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:12:24.767293  392700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-246361 minikube.k8s.io/updated_at=2025_12_13T09_12_24_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b minikube.k8s.io/name=addons-246361 minikube.k8s.io/primary=true
	I1213 09:12:24.931941  392700 ops.go:34] apiserver oom_adj: -16
	I1213 09:12:24.932072  392700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:12:25.433103  392700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:12:25.932874  392700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:12:26.432540  392700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:12:26.932611  392700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:12:27.432403  392700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:12:27.932915  392700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:12:28.432920  392700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:12:28.932540  392700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:12:29.432404  392700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:12:29.537697  392700 kubeadm.go:1114] duration metric: took 4.770512396s to wait for elevateKubeSystemPrivileges
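The repeated "kubectl get sa default" runs above are a fixed-interval poll: the default ServiceAccount only exists once the controller manager's service-account controller has run, so the command is retried roughly every 500ms until it succeeds (about 4.77s here). A minimal sketch of the same wait, shelling out to kubectl with the kubeconfig path used above (sudo/ssh plumbing omitted):

    package main

    import (
        "context"
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()

        tick := time.NewTicker(500 * time.Millisecond)
        defer tick.Stop()

        for {
            // Same probe the log shows: succeed once the ServiceAccount exists.
            cmd := exec.CommandContext(ctx, "kubectl",
                "--kubeconfig=/var/lib/minikube/kubeconfig",
                "get", "sa", "default")
            if err := cmd.Run(); err == nil {
                fmt.Println("default service account exists")
                return
            }
            select {
            case <-ctx.Done():
                log.Fatal("timed out waiting for default service account")
            case <-tick.C:
            }
        }
    }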
	I1213 09:12:29.537766  392700 kubeadm.go:403] duration metric: took 17.395370255s to StartCluster
	I1213 09:12:29.537794  392700 settings.go:142] acquiring lock: {Name:mk59569246b81cd6fde64cc849a423eeb59f3563 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:12:29.537948  392700 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-387918/kubeconfig
	I1213 09:12:29.538369  392700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/kubeconfig: {Name:mkc4c188214419e87992ca29ee1229c54fdde2b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:12:29.538694  392700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 09:12:29.538720  392700 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 09:12:29.538851  392700 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1213 09:12:29.538973  392700 config.go:182] Loaded profile config "addons-246361": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:12:29.539020  392700 addons.go:70] Setting yakd=true in profile "addons-246361"
	I1213 09:12:29.539039  392700 addons.go:70] Setting cloud-spanner=true in profile "addons-246361"
	I1213 09:12:29.539052  392700 addons.go:239] Setting addon yakd=true in "addons-246361"
	I1213 09:12:29.539053  392700 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-246361"
	I1213 09:12:29.539053  392700 addons.go:70] Setting registry-creds=true in profile "addons-246361"
	I1213 09:12:29.539065  392700 addons.go:70] Setting gcp-auth=true in profile "addons-246361"
	I1213 09:12:29.539074  392700 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-246361"
	I1213 09:12:29.539088  392700 mustload.go:66] Loading cluster: addons-246361
	I1213 09:12:29.539090  392700 host.go:66] Checking if "addons-246361" exists ...
	I1213 09:12:29.539088  392700 addons.go:239] Setting addon registry-creds=true in "addons-246361"
	I1213 09:12:29.539080  392700 addons.go:70] Setting default-storageclass=true in profile "addons-246361"
	I1213 09:12:29.539126  392700 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-246361"
	I1213 09:12:29.539161  392700 host.go:66] Checking if "addons-246361" exists ...
	I1213 09:12:29.539283  392700 config.go:182] Loaded profile config "addons-246361": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:12:29.539409  392700 addons.go:70] Setting storage-provisioner=true in profile "addons-246361"
	I1213 09:12:29.539451  392700 addons.go:239] Setting addon storage-provisioner=true in "addons-246361"
	I1213 09:12:29.539638  392700 host.go:66] Checking if "addons-246361" exists ...
	I1213 09:12:29.539319  392700 addons.go:70] Setting ingress=true in profile "addons-246361"
	I1213 09:12:29.539940  392700 addons.go:239] Setting addon ingress=true in "addons-246361"
	I1213 09:12:29.539998  392700 host.go:66] Checking if "addons-246361" exists ...
	I1213 09:12:29.539057  392700 addons.go:239] Setting addon cloud-spanner=true in "addons-246361"
	I1213 09:12:29.540069  392700 host.go:66] Checking if "addons-246361" exists ...
	I1213 09:12:29.539735  392700 addons.go:70] Setting volcano=true in profile "addons-246361"
	I1213 09:12:29.540722  392700 addons.go:239] Setting addon volcano=true in "addons-246361"
	I1213 09:12:29.540758  392700 host.go:66] Checking if "addons-246361" exists ...
	I1213 09:12:29.539749  392700 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-246361"
	I1213 09:12:29.540929  392700 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-246361"
	I1213 09:12:29.540958  392700 host.go:66] Checking if "addons-246361" exists ...
	I1213 09:12:29.539763  392700 addons.go:70] Setting metrics-server=true in profile "addons-246361"
	I1213 09:12:29.540994  392700 addons.go:239] Setting addon metrics-server=true in "addons-246361"
	I1213 09:12:29.541022  392700 host.go:66] Checking if "addons-246361" exists ...
	I1213 09:12:29.539772  392700 addons.go:70] Setting inspektor-gadget=true in profile "addons-246361"
	I1213 09:12:29.541116  392700 addons.go:239] Setting addon inspektor-gadget=true in "addons-246361"
	I1213 09:12:29.541142  392700 host.go:66] Checking if "addons-246361" exists ...
	I1213 09:12:29.541399  392700 out.go:179] * Verifying Kubernetes components...
	I1213 09:12:29.539795  392700 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-246361"
	I1213 09:12:29.539807  392700 addons.go:70] Setting registry=true in profile "addons-246361"
	I1213 09:12:29.539818  392700 addons.go:70] Setting volumesnapshots=true in profile "addons-246361"
	I1213 09:12:29.539023  392700 addons.go:70] Setting ingress-dns=true in profile "addons-246361"
	I1213 09:12:29.539032  392700 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-246361"
	I1213 09:12:29.541480  392700 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-246361"
	I1213 09:12:29.541538  392700 addons.go:239] Setting addon registry=true in "addons-246361"
	I1213 09:12:29.541561  392700 host.go:66] Checking if "addons-246361" exists ...
	I1213 09:12:29.541959  392700 addons.go:239] Setting addon volumesnapshots=true in "addons-246361"
	I1213 09:12:29.542011  392700 host.go:66] Checking if "addons-246361" exists ...
	I1213 09:12:29.541504  392700 host.go:66] Checking if "addons-246361" exists ...
	I1213 09:12:29.542199  392700 addons.go:239] Setting addon ingress-dns=true in "addons-246361"
	I1213 09:12:29.542239  392700 host.go:66] Checking if "addons-246361" exists ...
	I1213 09:12:29.541515  392700 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-246361"
	I1213 09:12:29.542403  392700 host.go:66] Checking if "addons-246361" exists ...
	I1213 09:12:29.543020  392700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:12:29.545455  392700 host.go:66] Checking if "addons-246361" exists ...
	I1213 09:12:29.547189  392700 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-246361"
	I1213 09:12:29.547256  392700 host.go:66] Checking if "addons-246361" exists ...
	I1213 09:12:29.547189  392700 addons.go:239] Setting addon default-storageclass=true in "addons-246361"
	I1213 09:12:29.547348  392700 host.go:66] Checking if "addons-246361" exists ...
	I1213 09:12:29.548099  392700 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1213 09:12:29.548167  392700 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1213 09:12:29.549023  392700 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1213 09:12:29.549119  392700 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1213 09:12:29.549911  392700 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1213 09:12:29.550037  392700 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1213 09:12:29.550054  392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1213 09:12:29.549929  392700 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1213 09:12:29.549938  392700 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1213 09:12:29.550289  392700 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1213 09:12:29.550926  392700 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1213 09:12:29.550965  392700 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1213 09:12:29.551001  392700 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:12:29.551369  392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 09:12:29.551814  392700 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1213 09:12:29.551827  392700 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1213 09:12:29.551838  392700 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1213 09:12:29.551869  392700 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1213 09:12:29.551888  392700 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1213 09:12:29.553051  392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1213 09:12:29.552361  392700 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 09:12:29.553131  392700 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 09:12:29.553575  392700 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1213 09:12:29.553619  392700 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 09:12:29.554058  392700 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 09:12:29.553642  392700 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1213 09:12:29.554358  392700 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1213 09:12:29.554376  392700 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1213 09:12:29.554424  392700 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1213 09:12:29.554932  392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1213 09:12:29.554430  392700 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 09:12:29.555031  392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1213 09:12:29.554444  392700 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 09:12:29.555162  392700 out.go:179]   - Using image docker.io/busybox:stable
	I1213 09:12:29.555158  392700 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1213 09:12:29.555223  392700 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 09:12:29.555808  392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1213 09:12:29.555920  392700 out.go:179]   - Using image docker.io/registry:3.0.0
	I1213 09:12:29.555961  392700 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 09:12:29.556353  392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1213 09:12:29.556808  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.557967  392700 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1213 09:12:29.558366  392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1213 09:12:29.558364  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:29.558400  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.557973  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.558609  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.558663  392700 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 09:12:29.558666  392700 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1213 09:12:29.559185  392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
	I1213 09:12:29.559378  392700 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1213 09:12:29.560365  392700 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 09:12:29.560383  392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1213 09:12:29.560421  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:29.560451  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.560459  392700 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 09:12:29.560476  392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1213 09:12:29.560521  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:29.560714  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.561263  392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
	I1213 09:12:29.561439  392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
	I1213 09:12:29.562280  392700 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1213 09:12:29.563051  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.563304  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.564057  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:29.564094  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.564398  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.564654  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:29.564685  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.564708  392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
	I1213 09:12:29.564850  392700 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1213 09:12:29.565074  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.565342  392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
	I1213 09:12:29.565808  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:29.565840  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.566182  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.566403  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.566838  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:29.566872  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.567019  392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
	I1213 09:12:29.567145  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.567513  392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
	I1213 09:12:29.567537  392700 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1213 09:12:29.567838  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:29.567942  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:29.567988  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.568044  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.568149  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.568347  392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
	I1213 09:12:29.568475  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:29.568501  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.568666  392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
	I1213 09:12:29.569181  392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
	I1213 09:12:29.569231  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.569557  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:29.569589  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.569814  392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
	I1213 09:12:29.570087  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:29.570116  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.570231  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.570294  392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
	I1213 09:12:29.570387  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.570745  392700 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1213 09:12:29.570846  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:29.570878  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:29.570914  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.570883  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.571130  392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
	I1213 09:12:29.571434  392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
	I1213 09:12:29.573751  392700 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1213 09:12:29.574842  392700 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1213 09:12:29.574859  392700 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1213 09:12:29.577229  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.577563  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:29.577584  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:29.577719  392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
	W1213 09:12:29.933537  392700 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:50640->192.168.39.185:22: read: connection reset by peer
	I1213 09:12:29.933586  392700 retry.go:31] will retry after 197.844594ms: ssh: handshake failed: read tcp 192.168.39.1:50640->192.168.39.185:22: read: connection reset by peer
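The handshake failure above appears transient (one of the many parallel SSH connections to the freshly booted VM was reset), so the client simply retries after a short randomized delay. A minimal sketch of that retry pattern around an arbitrary dial function; the jitter range is an assumption, only the "will retry after ~200ms" behaviour is visible in the log:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryDial retries fn with a small randomized delay between attempts,
    // mirroring the "will retry after 197ms" behaviour in the log above.
    func retryDial(attempts int, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            delay := 100*time.Millisecond + time.Duration(rand.Intn(200))*time.Millisecond
            fmt.Printf("dial failed (%v), retrying in %s\n", err, delay)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        calls := 0
        err := retryDial(5, func() error {
            calls++
            if calls < 3 { // simulate two transient connection resets
                return errors.New("ssh: handshake failed: connection reset by peer")
            }
            return nil
        })
        fmt.Println("result:", err)
    }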
	I1213 09:12:30.338791  392700 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 09:12:30.338822  392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1213 09:12:30.499501  392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1213 09:12:30.510589  392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:12:30.523971  392700 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1213 09:12:30.524015  392700 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1213 09:12:30.526184  392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 09:12:30.549254  392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 09:12:30.593131  392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 09:12:30.594698  392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1213 09:12:30.599182  392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 09:12:30.612439  392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 09:12:30.625079  392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 09:12:30.633054  392700 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.094311028s)
	I1213 09:12:30.633075  392700 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.0900139s)
	I1213 09:12:30.633168  392700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:12:30.633277  392700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 09:12:30.675055  392700 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 09:12:30.675092  392700 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 09:12:30.680424  392700 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1213 09:12:30.680449  392700 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1213 09:12:30.730911  392700 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1213 09:12:30.730935  392700 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1213 09:12:30.760367  392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1213 09:12:30.838715  392700 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1213 09:12:30.838743  392700 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1213 09:12:31.109144  392700 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1213 09:12:31.109165  392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1213 09:12:31.265733  392700 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 09:12:31.265775  392700 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 09:12:31.290008  392700 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1213 09:12:31.290077  392700 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1213 09:12:31.367762  392700 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1213 09:12:31.367795  392700 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1213 09:12:31.473065  392700 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1213 09:12:31.473101  392700 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1213 09:12:31.503287  392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1213 09:12:31.646622  392700 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1213 09:12:31.646654  392700 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1213 09:12:31.646715  392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 09:12:31.702602  392700 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1213 09:12:31.702635  392700 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1213 09:12:31.818406  392700 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1213 09:12:31.818437  392700 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1213 09:12:32.109802  392700 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1213 09:12:32.109829  392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1213 09:12:32.113835  392700 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1213 09:12:32.113860  392700 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1213 09:12:32.244077  392700 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1213 09:12:32.244112  392700 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1213 09:12:32.441387  392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1213 09:12:32.450657  392700 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 09:12:32.450682  392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1213 09:12:32.619691  392700 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1213 09:12:32.619729  392700 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1213 09:12:32.798421  392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 09:12:33.028942  392700 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1213 09:12:33.028982  392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1213 09:12:33.610564  392700 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1213 09:12:33.610597  392700 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1213 09:12:33.890055  392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.390506314s)
	I1213 09:12:33.929974  392700 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1213 09:12:33.930001  392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1213 09:12:34.139582  392700 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1213 09:12:34.139623  392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1213 09:12:34.536656  392700 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 09:12:34.536694  392700 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1213 09:12:34.747808  392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 09:12:35.084304  392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.573673261s)
	I1213 09:12:35.084422  392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.558198971s)
	I1213 09:12:36.729713  392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.180408205s)
	I1213 09:12:37.009362  392700 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1213 09:12:37.012488  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:37.012968  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:37.013014  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:37.013180  392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
	I1213 09:12:37.245752  392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (6.651008917s)
	I1213 09:12:37.245872  392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.652715161s)
	I1213 09:12:37.246016  392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.646797926s)
	I1213 09:12:37.246073  392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.633594881s)
	I1213 09:12:37.317966  392700 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1213 09:12:37.470559  392700 addons.go:239] Setting addon gcp-auth=true in "addons-246361"
	I1213 09:12:37.470632  392700 host.go:66] Checking if "addons-246361" exists ...
	I1213 09:12:37.472656  392700 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1213 09:12:37.474964  392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:37.475365  392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
	I1213 09:12:37.475391  392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
	I1213 09:12:37.475549  392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
	I1213 09:12:38.124537  392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.499407472s)
	I1213 09:12:38.124591  392700 addons.go:495] Verifying addon ingress=true in "addons-246361"
	I1213 09:12:38.124588  392700 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.49127635s)
	I1213 09:12:38.124618  392700 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.491422156s)
	I1213 09:12:38.124681  392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.364280896s)
	I1213 09:12:38.124622  392700 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1213 09:12:38.124750  392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.621438006s)
	I1213 09:12:38.124767  392700 addons.go:495] Verifying addon registry=true in "addons-246361"
	I1213 09:12:38.125001  392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.478248802s)
	I1213 09:12:38.125046  392700 addons.go:495] Verifying addon metrics-server=true in "addons-246361"
	I1213 09:12:38.125103  392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.683676023s)
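The coredns ConfigMap rewrite that completed above injects a hosts block so that host.minikube.internal resolves to the host-side IP 192.168.39.1 from inside the cluster. Based on the sed expression shown in the command, the injected stanza is the one below; the sketch only illustrates where it lands relative to the forward directive (the surrounding Corefile lines are assumptions):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Trimmed-down Corefile; only the forward line matters for the injection.
        corefile := `.:53 {
        errors
        health
        forward . /etc/resolv.conf
        cache 30
}`

        // Stanza taken from the sed expression in the log above.
        hostsBlock := `        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
`
        patched := strings.Replace(corefile,
            "        forward . /etc/resolv.conf",
            hostsBlock+"        forward . /etc/resolv.conf", 1)
        fmt.Println(patched)
    }

The same sed expression also inserts a log directive before errors, which is omitted here for brevity.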
	I1213 09:12:38.125682  392700 node_ready.go:35] waiting up to 6m0s for node "addons-246361" to be "Ready" ...
	I1213 09:12:38.126988  392700 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-246361 service yakd-dashboard -n yakd-dashboard
	
	I1213 09:12:38.126998  392700 out.go:179] * Verifying registry addon...
	I1213 09:12:38.127019  392700 out.go:179] * Verifying ingress addon...
	I1213 09:12:38.129134  392700 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1213 09:12:38.129145  392700 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1213 09:12:38.134101  392700 node_ready.go:49] node "addons-246361" is "Ready"
	I1213 09:12:38.134126  392700 node_ready.go:38] duration metric: took 8.324074ms for node "addons-246361" to be "Ready" ...
	I1213 09:12:38.134141  392700 api_server.go:52] waiting for apiserver process to appear ...
	I1213 09:12:38.134193  392700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:12:38.149759  392700 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 09:12:38.149781  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:38.150916  392700 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1213 09:12:38.150943  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:38.658750  392700 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-246361" context rescaled to 1 replicas
	I1213 09:12:38.666607  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:38.670387  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:39.102011  392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.303535198s)
	W1213 09:12:39.102078  392700 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1213 09:12:39.102111  392700 retry.go:31] will retry after 359.726202ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1213 09:12:39.155064  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:39.155751  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:39.462728  392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
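The failure being retried here is an ordering race, not a broken manifest: the VolumeSnapshotClass object and the CRD that defines it are applied in the same batch, and the apiserver has not yet established the new CRD when the class is submitted, hence "ensure CRDs are installed first". The retry with apply --force succeeds once the CRDs are registered. One way to avoid the race is to wait for the CRD explicitly between the two applies; a minimal sketch shelling out to kubectl, with the CRD name taken from the stdout above:

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func run(args ...string) {
        cmd := exec.Command("kubectl", args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatalf("kubectl %v: %v", args, err)
        }
    }

    func main() {
        // 1. Install the snapshot CRDs first.
        run("apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml")
        // 2. Block until the CRD is established before creating instances of it.
        run("wait", "--for=condition=established", "--timeout=60s",
            "crd/volumesnapshotclasses.snapshot.storage.k8s.io")
        // 3. Now the VolumeSnapshotClass manifest can be applied safely.
        run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml")
    }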
	I1213 09:12:39.643966  392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.896074746s)
	I1213 09:12:39.644026  392700 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-246361"
	I1213 09:12:39.644023  392700 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.171325845s)
	I1213 09:12:39.644079  392700 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.509861668s)
	I1213 09:12:39.644115  392700 api_server.go:72] duration metric: took 10.105347669s to wait for apiserver process to appear ...
	I1213 09:12:39.644128  392700 api_server.go:88] waiting for apiserver healthz status ...
	I1213 09:12:39.644244  392700 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I1213 09:12:39.646045  392700 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 09:12:39.646070  392700 out.go:179] * Verifying csi-hostpath-driver addon...
	I1213 09:12:39.648025  392700 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1213 09:12:39.648484  392700 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1213 09:12:39.649153  392700 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1213 09:12:39.649181  392700 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1213 09:12:39.657088  392700 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I1213 09:12:39.661986  392700 api_server.go:141] control plane version: v1.34.2
	I1213 09:12:39.662011  392700 api_server.go:131] duration metric: took 17.78801ms to wait for apiserver health ...
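The healthz wait above is a plain HTTPS GET against the apiserver endpoint until it answers 200/ok. A minimal sketch of such a probe, using the endpoint from the log; as assumptions for the sketch, certificate verification is skipped instead of loading the cluster CA, and /healthz is reachable without a client certificate (depending on the cluster's anonymous-auth settings, client credentials may be required):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Assumption for the sketch only: skip TLS verification rather than
            // wiring up the minikube cluster CA certificate.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.39.185:8443/healthz")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }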
	I1213 09:12:39.662020  392700 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 09:12:39.710789  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:39.710847  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:39.711019  392700 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 09:12:39.711047  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:39.711409  392700 system_pods.go:59] 20 kube-system pods found
	I1213 09:12:39.711452  392700 system_pods.go:61] "amd-gpu-device-plugin-pcr8k" [ae35898b-cac4-4c5d-b1f5-3de19fba17ef] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 09:12:39.711464  392700 system_pods.go:61] "coredns-66bc5c9577-225xg" [f6715b38-1f5c-45f6-ae76-9c279196f39b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:12:39.711473  392700 system_pods.go:61] "coredns-66bc5c9577-x9vlt" [e3722310-4cbe-4697-8045-c8353e07f242] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:12:39.711482  392700 system_pods.go:61] "csi-hostpath-attacher-0" [9d647a4f-c7a0-4cb6-972a-ee1caa579994] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 09:12:39.711486  392700 system_pods.go:61] "csi-hostpath-resizer-0" [0f6f307e-148c-4651-b4f6-3f3f1c171223] Pending
	I1213 09:12:39.711495  392700 system_pods.go:61] "csi-hostpathplugin-lcmz2" [57b68f56-3f72-481e-a5dc-48874663d2b0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 09:12:39.711498  392700 system_pods.go:61] "etcd-addons-246361" [9c6ee6ef-dcf1-4eb6-843f-bbe57ee104d0] Running
	I1213 09:12:39.711502  392700 system_pods.go:61] "kube-apiserver-addons-246361" [95cfa299-af07-4241-99e6-f974e0615596] Running
	I1213 09:12:39.711506  392700 system_pods.go:61] "kube-controller-manager-addons-246361" [598b762c-7498-4f89-8bad-8c38caaf259f] Running
	I1213 09:12:39.711526  392700 system_pods.go:61] "kube-ingress-dns-minikube" [3d548ec6-ac97-4b00-a992-cf50e0728d3c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 09:12:39.711535  392700 system_pods.go:61] "kube-proxy-f6vpr" [b60db149-95ea-4d92-88d4-958521a5cf75] Running
	I1213 09:12:39.711541  392700 system_pods.go:61] "kube-scheduler-addons-246361" [6768514c-186b-4e66-bb4d-e4e91b025fb2] Running
	I1213 09:12:39.711549  392700 system_pods.go:61] "metrics-server-85b7d694d7-pglv5" [ce676a7b-70bb-4524-b292-8a00796b0425] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 09:12:39.711558  392700 system_pods.go:61] "nvidia-device-plugin-daemonset-ghprj" [64bd87e7-7e06-4465-abb1-e27282853105] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 09:12:39.711570  392700 system_pods.go:61] "registry-6b586f9694-4vn9j" [0ffa6230-ba82-4c5a-bfd3-a4c73acdce35] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 09:12:39.711585  392700 system_pods.go:61] "registry-creds-764b6fb674-9h8mr" [9b8507b1-f028-4a81-8e59-5773d4e71038] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 09:12:39.711590  392700 system_pods.go:61] "registry-proxy-q8xvn" [6c738182-6c24-4d8e-acc8-25d9eae8cfbd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 09:12:39.711596  392700 system_pods.go:61] "snapshot-controller-7d9fbc56b8-g7rgv" [bebdd078-f41c-4293-a21f-61f2269782c8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 09:12:39.711602  392700 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xps7r" [07924ab0-91ea-41fa-bf06-3b4cc735fdae] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 09:12:39.711607  392700 system_pods.go:61] "storage-provisioner" [9b05e28d-a4a6-4e90-af0c-bf01fd93b1e1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:12:39.711615  392700 system_pods.go:74] duration metric: took 49.575503ms to wait for pod list to return data ...
	I1213 09:12:39.711630  392700 default_sa.go:34] waiting for default service account to be created ...
	I1213 09:12:39.724425  392700 default_sa.go:45] found service account: "default"
	I1213 09:12:39.724455  392700 default_sa.go:55] duration metric: took 12.816866ms for default service account to be created ...
	I1213 09:12:39.724464  392700 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 09:12:39.741228  392700 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1213 09:12:39.741253  392700 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1213 09:12:39.770994  392700 system_pods.go:86] 20 kube-system pods found
	I1213 09:12:39.771032  392700 system_pods.go:89] "amd-gpu-device-plugin-pcr8k" [ae35898b-cac4-4c5d-b1f5-3de19fba17ef] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 09:12:39.771047  392700 system_pods.go:89] "coredns-66bc5c9577-225xg" [f6715b38-1f5c-45f6-ae76-9c279196f39b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:12:39.771060  392700 system_pods.go:89] "coredns-66bc5c9577-x9vlt" [e3722310-4cbe-4697-8045-c8353e07f242] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:12:39.771067  392700 system_pods.go:89] "csi-hostpath-attacher-0" [9d647a4f-c7a0-4cb6-972a-ee1caa579994] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 09:12:39.771075  392700 system_pods.go:89] "csi-hostpath-resizer-0" [0f6f307e-148c-4651-b4f6-3f3f1c171223] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 09:12:39.771084  392700 system_pods.go:89] "csi-hostpathplugin-lcmz2" [57b68f56-3f72-481e-a5dc-48874663d2b0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 09:12:39.771091  392700 system_pods.go:89] "etcd-addons-246361" [9c6ee6ef-dcf1-4eb6-843f-bbe57ee104d0] Running
	I1213 09:12:39.771098  392700 system_pods.go:89] "kube-apiserver-addons-246361" [95cfa299-af07-4241-99e6-f974e0615596] Running
	I1213 09:12:39.771106  392700 system_pods.go:89] "kube-controller-manager-addons-246361" [598b762c-7498-4f89-8bad-8c38caaf259f] Running
	I1213 09:12:39.771111  392700 system_pods.go:89] "kube-ingress-dns-minikube" [3d548ec6-ac97-4b00-a992-cf50e0728d3c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 09:12:39.771115  392700 system_pods.go:89] "kube-proxy-f6vpr" [b60db149-95ea-4d92-88d4-958521a5cf75] Running
	I1213 09:12:39.771119  392700 system_pods.go:89] "kube-scheduler-addons-246361" [6768514c-186b-4e66-bb4d-e4e91b025fb2] Running
	I1213 09:12:39.771123  392700 system_pods.go:89] "metrics-server-85b7d694d7-pglv5" [ce676a7b-70bb-4524-b292-8a00796b0425] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 09:12:39.771129  392700 system_pods.go:89] "nvidia-device-plugin-daemonset-ghprj" [64bd87e7-7e06-4465-abb1-e27282853105] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 09:12:39.771143  392700 system_pods.go:89] "registry-6b586f9694-4vn9j" [0ffa6230-ba82-4c5a-bfd3-a4c73acdce35] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 09:12:39.771153  392700 system_pods.go:89] "registry-creds-764b6fb674-9h8mr" [9b8507b1-f028-4a81-8e59-5773d4e71038] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 09:12:39.771161  392700 system_pods.go:89] "registry-proxy-q8xvn" [6c738182-6c24-4d8e-acc8-25d9eae8cfbd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 09:12:39.771169  392700 system_pods.go:89] "snapshot-controller-7d9fbc56b8-g7rgv" [bebdd078-f41c-4293-a21f-61f2269782c8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 09:12:39.771186  392700 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xps7r" [07924ab0-91ea-41fa-bf06-3b4cc735fdae] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 09:12:39.771196  392700 system_pods.go:89] "storage-provisioner" [9b05e28d-a4a6-4e90-af0c-bf01fd93b1e1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:12:39.771205  392700 system_pods.go:126] duration metric: took 46.73405ms to wait for k8s-apps to be running ...
	I1213 09:12:39.771215  392700 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 09:12:39.771271  392700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:12:39.824008  392700 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 09:12:39.824035  392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1213 09:12:39.938872  392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 09:12:40.139210  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:40.139318  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:40.160406  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:40.637579  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:40.639764  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:40.653405  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:41.140286  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:41.142006  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:41.154100  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:41.659408  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:41.666977  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:41.706882  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:41.707590  392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.244812289s)
	I1213 09:12:41.707622  392700 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.936324258s)
	I1213 09:12:41.707654  392700 system_svc.go:56] duration metric: took 1.93643377s WaitForService to wait for kubelet
	I1213 09:12:41.707670  392700 kubeadm.go:587] duration metric: took 12.168901765s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:12:41.707700  392700 node_conditions.go:102] verifying NodePressure condition ...
	I1213 09:12:41.707704  392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.768793764s)
	I1213 09:12:41.708822  392700 addons.go:495] Verifying addon gcp-auth=true in "addons-246361"
	I1213 09:12:41.711170  392700 out.go:179] * Verifying gcp-auth addon...
	I1213 09:12:41.713116  392700 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1213 09:12:41.732539  392700 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 09:12:41.732608  392700 node_conditions.go:123] node cpu capacity is 2
	I1213 09:12:41.732637  392700 node_conditions.go:105] duration metric: took 24.930191ms to run NodePressure ...
	I1213 09:12:41.732656  392700 start.go:242] waiting for startup goroutines ...
	I1213 09:12:41.747891  392700 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1213 09:12:41.747929  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:42.138893  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:42.141690  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:42.157929  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:42.219339  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:42.637038  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:42.637103  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:42.659295  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:42.735695  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:43.137422  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:43.137589  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:43.152264  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:43.219460  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:43.635247  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:43.636314  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:43.655595  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:43.719173  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:44.133309  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:44.136803  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:44.153247  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:44.219809  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:44.638851  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:44.640014  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:44.652718  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:44.725183  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:45.133593  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:45.136464  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:45.155694  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:45.216820  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:45.633873  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:45.633978  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:45.652866  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:45.716526  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:46.137389  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:46.137441  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:46.152335  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:46.237426  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:46.634114  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:46.634108  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:46.652970  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:46.717097  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:47.135279  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:47.136853  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:47.152605  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:47.218060  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:47.633152  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:47.633159  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:47.651956  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:47.719945  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:48.133994  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:48.134541  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:48.154724  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:48.217360  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:48.637246  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:48.637488  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:48.653939  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:48.718287  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:49.136026  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:49.138572  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:49.154830  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:49.222042  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:49.633871  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:49.634758  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:49.652591  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:49.717441  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:50.133119  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:50.134136  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:50.153957  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:50.217526  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:50.637032  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:50.638292  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:50.652212  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:50.719546  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:51.132697  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:51.134911  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:51.153412  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:51.216230  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:51.634427  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:51.634453  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:51.653045  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:51.717637  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:52.132518  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:52.133301  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:52.153671  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:52.216992  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:52.633949  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:52.634621  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:52.653764  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:52.716829  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:53.134230  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:53.134736  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:53.152090  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:53.234461  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:53.638367  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:53.639934  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:53.652859  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:53.717349  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:54.134998  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:54.138551  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:54.154961  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:54.218702  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:54.636288  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:54.637624  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:54.656413  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:54.717989  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:55.144978  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:55.145144  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:55.152803  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:55.222062  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:55.634895  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:55.638084  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:55.654400  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:55.719756  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:56.135967  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:56.136036  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:56.154005  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:56.216806  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:56.637027  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:56.638197  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:56.652922  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:56.720544  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:57.135676  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:57.135998  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:57.153314  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:57.217350  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:57.636027  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:57.636150  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:57.654346  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:57.719682  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:58.134179  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:58.134488  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:58.153218  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:58.218931  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:58.633676  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:58.646837  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:58.659524  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:58.846774  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:59.133773  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:59.138462  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:59.154817  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:59.217360  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:12:59.635172  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:12:59.635452  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:12:59.653150  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:12:59.719394  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:00.140043  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:00.140077  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:13:00.155240  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:00.217238  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:00.632844  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:13:00.634028  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:00.652554  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:00.719012  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:01.135956  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:01.136170  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:13:01.153866  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:01.218388  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:01.633988  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:01.635569  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:13:01.653593  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:01.717088  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:02.133891  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:13:02.134883  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:02.152416  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:02.216310  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:02.633309  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:13:02.633358  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:02.653614  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:02.907546  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:03.138083  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:13:03.138389  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:03.155252  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:03.219188  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:03.636724  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:13:03.636927  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:03.653863  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:03.718355  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:04.133704  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:13:04.133893  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:04.156017  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:04.219716  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:04.634411  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:04.634568  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:13:04.651913  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:04.717220  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:05.133743  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:13:05.133930  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:05.151990  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:05.218427  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:05.635037  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:13:05.635881  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:05.651616  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:05.716683  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:06.139749  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:13:06.139902  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:06.154731  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:06.217561  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:06.636020  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:13:06.636930  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:06.653041  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:06.718913  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:07.135136  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:13:07.135956  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:07.153942  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:07.217659  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:07.633809  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:07.634040  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:13:07.652145  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:07.717939  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:08.133246  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 09:13:08.133443  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:08.154595  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:08.216656  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:08.642469  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:08.643951  392700 kapi.go:107] duration metric: took 30.514802697s to wait for kubernetes.io/minikube-addons=registry ...
	I1213 09:13:08.655024  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:08.717766  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:09.136867  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:09.152424  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:09.217371  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:09.635119  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:09.654132  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:09.722734  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:10.192962  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:10.194531  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:10.218185  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:10.636725  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:10.736670  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:10.736900  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:11.135314  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:11.153237  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:11.217281  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:11.636555  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:11.654253  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:11.726653  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:12.135121  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:12.153899  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:12.220374  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:12.634901  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:12.653249  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:12.736313  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:13.138621  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:13.154892  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:13.218461  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:13.633945  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:13.653863  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:13.717531  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:14.134430  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:14.153370  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:14.220987  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:14.635625  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:14.653675  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:14.717065  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:15.290476  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:15.294725  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:15.295288  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:15.633097  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:15.652692  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:15.719082  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:16.133941  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:16.151867  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:16.217258  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:16.633243  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:16.652470  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:16.716521  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:17.133005  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:17.153891  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:17.219219  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:17.634474  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:17.653145  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:17.719636  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:18.135146  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:18.154858  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:18.219408  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:18.634755  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:18.654147  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:18.719755  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:19.135426  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:19.153964  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:19.235466  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:19.634565  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:19.653337  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:19.717349  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:20.134431  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:20.152713  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:20.216907  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:20.632379  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:20.651987  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:20.717161  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:21.135375  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:21.151976  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:21.219921  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:21.638965  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:21.655875  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:21.720726  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:22.134920  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:22.155629  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:22.221571  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:22.635492  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:22.656050  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:22.717301  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:23.133519  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:23.152620  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:23.220694  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:23.634660  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:23.651788  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:23.717709  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:24.134370  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:24.154872  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:24.217411  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:24.637897  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:24.655242  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:24.737385  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:25.139561  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:25.152697  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:25.218462  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:25.633396  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:25.652864  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:25.719692  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:26.134257  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:26.154878  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:26.219128  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:26.635889  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:26.652911  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:26.718787  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:27.134254  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:27.152526  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:27.219045  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:27.824440  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:27.824673  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:27.824764  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:28.134337  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:28.152509  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:28.217241  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:28.634935  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:28.652593  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:28.717918  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:29.133064  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:29.153752  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:29.219470  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:29.635380  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:29.654431  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:29.735214  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:30.138785  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:30.157605  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:30.217959  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:30.635195  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:30.654836  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:30.717703  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:31.136745  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:31.236089  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:31.236139  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:31.636436  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:31.653803  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:31.717224  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:32.134632  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:32.235101  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:32.235344  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:32.645522  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:32.652719  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:32.720679  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:33.137718  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:33.156668  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:33.218572  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:33.634174  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:33.653084  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:33.718037  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:34.133279  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:34.154266  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:34.217997  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:34.638043  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:34.652893  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:34.738037  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:35.137396  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:35.156924  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:35.221444  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:35.633900  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:35.653879  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:35.718527  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:36.135422  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:36.154283  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:36.235962  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:36.636593  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:36.652522  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:36.717972  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:37.133524  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:37.155198  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:37.218089  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:37.632895  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:37.652282  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 09:13:37.717892  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:38.137885  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:38.152009  392700 kapi.go:107] duration metric: took 58.503521481s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1213 09:13:38.217812  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:38.632433  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:38.716310  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:39.133741  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:39.217319  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:39.634385  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:39.717042  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:40.134071  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:40.218670  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:40.633928  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:40.717426  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:41.134250  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:41.218023  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:41.633705  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:41.716431  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:42.133780  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:42.220119  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:42.633483  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:42.717173  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:43.134213  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:43.233819  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:43.637203  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:43.720025  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:44.136090  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:44.221521  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:44.639036  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:44.719948  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:45.135039  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:45.218899  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:45.633976  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:45.721350  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:46.134138  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:46.217998  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:46.637669  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:46.717179  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:47.136544  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:47.219570  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:47.633528  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:47.718883  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:48.161462  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:48.221832  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:48.634511  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:48.721194  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:49.133791  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:49.218406  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:49.789041  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:49.792013  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:50.133134  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:50.234743  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:50.633314  392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 09:13:50.716316  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:51.134218  392700 kapi.go:107] duration metric: took 1m13.005080428s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1213 09:13:51.216872  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:51.716458  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:52.220557  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:52.717232  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:53.218795  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:53.719651  392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 09:13:54.217821  392700 kapi.go:107] duration metric: took 1m12.504699283s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1213 09:13:54.219489  392700 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-246361 cluster.
	I1213 09:13:54.220708  392700 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1213 09:13:54.221841  392700 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1213 09:13:54.223192  392700 out.go:179] * Enabled addons: cloud-spanner, storage-provisioner, amd-gpu-device-plugin, storage-provisioner-rancher, inspektor-gadget, nvidia-device-plugin, ingress-dns, default-storageclass, registry-creds, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1213 09:13:54.224371  392700 addons.go:530] duration metric: took 1m24.68552319s for enable addons: enabled=[cloud-spanner storage-provisioner amd-gpu-device-plugin storage-provisioner-rancher inspektor-gadget nvidia-device-plugin ingress-dns default-storageclass registry-creds metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1213 09:13:54.224419  392700 start.go:247] waiting for cluster config update ...
	I1213 09:13:54.224443  392700 start.go:256] writing updated cluster config ...
	I1213 09:13:54.224792  392700 ssh_runner.go:195] Run: rm -f paused
	I1213 09:13:54.231309  392700 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:13:54.235252  392700 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x9vlt" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:13:54.240283  392700 pod_ready.go:94] pod "coredns-66bc5c9577-x9vlt" is "Ready"
	I1213 09:13:54.240335  392700 pod_ready.go:86] duration metric: took 5.040196ms for pod "coredns-66bc5c9577-x9vlt" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:13:54.243055  392700 pod_ready.go:83] waiting for pod "etcd-addons-246361" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:13:54.248721  392700 pod_ready.go:94] pod "etcd-addons-246361" is "Ready"
	I1213 09:13:54.248748  392700 pod_ready.go:86] duration metric: took 5.663324ms for pod "etcd-addons-246361" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:13:54.251231  392700 pod_ready.go:83] waiting for pod "kube-apiserver-addons-246361" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:13:54.256167  392700 pod_ready.go:94] pod "kube-apiserver-addons-246361" is "Ready"
	I1213 09:13:54.256189  392700 pod_ready.go:86] duration metric: took 4.938995ms for pod "kube-apiserver-addons-246361" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:13:54.258262  392700 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-246361" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:13:54.636246  392700 pod_ready.go:94] pod "kube-controller-manager-addons-246361" is "Ready"
	I1213 09:13:54.636274  392700 pod_ready.go:86] duration metric: took 377.99103ms for pod "kube-controller-manager-addons-246361" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:13:54.836613  392700 pod_ready.go:83] waiting for pod "kube-proxy-f6vpr" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:13:55.236131  392700 pod_ready.go:94] pod "kube-proxy-f6vpr" is "Ready"
	I1213 09:13:55.236163  392700 pod_ready.go:86] duration metric: took 399.509399ms for pod "kube-proxy-f6vpr" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:13:55.436277  392700 pod_ready.go:83] waiting for pod "kube-scheduler-addons-246361" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:13:55.835225  392700 pod_ready.go:94] pod "kube-scheduler-addons-246361" is "Ready"
	I1213 09:13:55.835252  392700 pod_ready.go:86] duration metric: took 398.944175ms for pod "kube-scheduler-addons-246361" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:13:55.835265  392700 pod_ready.go:40] duration metric: took 1.603895142s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:13:55.881801  392700 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 09:13:55.883802  392700 out.go:179] * Done! kubectl is now configured to use "addons-246361" cluster and "default" namespace by default
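	(On the gcp-auth hints above: the gcp-auth-skip-secret label has to be part of the pod configuration when the pod is created; the same output notes that already-running pods only pick up credentials after being recreated or after rerunning addons enable with --refresh. A minimal sketch of such a per-pod opt-out, assuming the label value "true" and a throwaway busybox pod, neither of which is taken from this run:
	  kubectl --context addons-246361 run gcp-auth-optout-demo --image=busybox --labels=gcp-auth-skip-secret=true --restart=Never -- sleep 300
	Only the label key gcp-auth-skip-secret comes from the gcp-auth output above.)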
	
	
	==> CRI-O <==
	Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.177800435Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d18cb8bc-5a90-4410-8af5-65fbf9828e5c name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.178378299Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:628516a7d73533daff5dd847807b063734b490b3be17f322069b5862cab3bbda,PodSandboxId:7d574a2f34f68882bdbd41bbed987b533c20016e5acd576de42b45b2f324fd59,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765617259603055897,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6b69c078-1088-484d-990b-d8794ed9b2c6,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e37ee820ac55b9c1336b4d106799b475cfaa12f0a5d71aa35438310e3ce95399,PodSandboxId:dced63ac053305c7768f9cd746ec2d926ee40d250a6a06227d94c76fd66672f3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765617240445757059,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: decad740-d6c4-4453-a6a3-0a9ac1f58430,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3c9248017da5b177fe1f69d3216863ca995f332ee106f91d1d36bccc73dfe7,PodSandboxId:41bffb9ec70750921a60e8e0f102b77b9dfdb3057eecc6c8a33d6cc78e2021d9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765617229970386771,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-w2qnr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8b2f9c0b-2e02-4126-9d1d-c1f045ca6f6b,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:628eb745c903d7a888d0fb1d5f9b057d7d5ada312e38a16b6699fd6395681a02,PodSandboxId:2d56fcdd6385824c310e7be7c766a9e95ca4da7b4a8575f4a4d455d21f2e803f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e
,State:CONTAINER_EXITED,CreatedAt:1765617199761909099,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rtxd5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 153bc1ac-d8e9-4540-b55a-2728ae1974e8,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73c912d2dff111e0461477800bb1445315bf3d43661c24a9aa6e2279fc3617b0,PodSandboxId:4505d18199645c770afd301a1ee3881a4007ba99590175cae7dd91ea1410870f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7
e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765617199201265517,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6zvn2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f555fef8-9057-4114-af37-9d7365c0bef2,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86b4f2a90f2e3491953330a91cd5ade57ff093679a59bf9569f93a7b6ef247b0,PodSandboxId:4f8767a0b981d20b0bc7e5d2b9a4b04b6bb23a1a44e35e6ac916938b5cb1d481,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765617180774283051,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d548ec6-ac97-4b00-a992-cf50e0728d3c,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89eb96caabf1c75e5905d56d63737b533b45d5de24141cdf765492315cbd1765,PodSandboxId:8b2438312f009f66d4a55e56fba7f01549c3ac03ca6c6148d7686479be1bfe4c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38
35498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765617159210431241,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pcr8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae35898b-cac4-4c5d-b1f5-3de19fba17ef,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4877853c63147ba15265057e2c32a56362db34c6d6bbc67ceff075e7fe08686,PodSandboxId:4dad2a7dc053f19c5407c74f603ff91fc75fcbc6f12138ce3f39a1b46abafd09,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765617158864388340,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b05e28d-a4a6-4e90-af0c-bf01fd93b1e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2012f15f4ade88a5c865283e3de2526fc6c1a98918db531fe20e87e5809f3b2,PodSandboxId:f9a5e2370f1b141a45a09fdaca5db063a4936d5ce229ece30d51420e77101827,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a
9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765617150825692423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-x9vlt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3722310-4cbe-4697-8045-c8353e07f242,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a8082d73b8ece1f67e20c64e2cfab51d335ac46b0b40b55603142da740c91a3,PodSandboxId:ee4e7efa604ef12b36cdd19d812f24148ab40013f8080d40bda6b4383db8b3de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765617150031472376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f6vpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b60db149-95ea-4d92-88d4-958521a5cf75,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c0f467af6def5dd49ebbfbba9a5ba99410764f3415aaf4f3adf2ba77c16191d,PodSandboxId:f74fa673bfef882302672a71d399a5465966cf243a48410013087564e837a849,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765617137916504372,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-246361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397c8ea73c97f3674fbdc97e9d7e7383,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:538894d57d3ca06d99af82f5f05877513892aa26744c920bec59842908f9af2c,PodSandboxId:47e4cfa9e38fae41f767e03875b44103ab0dd7ac0db7ecf0421933ae7d0242f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765617137886876529,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-246361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 946988fd2b590065078c2500551ccf5e,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe
-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd984a20ab1f80a715a1ade17f930872c415db4424e9b3a206a11cddff88ed81,PodSandboxId:6af2fba32bd98f8091114ab3194cb5e1527b2788f377063378a7ab77dbe8f666,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765617137903829248,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-246361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb770669e3c9ac4d04b00d62d163fe1c,},Annotations:
map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae132e84c3ae2c02d1dfcf431c4e4d10f6186e4af908262d22d2517a2e18c6b8,PodSandboxId:69ad068c41f7050551ff1f728dffef80fcf60dd8834187e5432815f09eeb554f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765617137874957679,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-246361,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 04942281d89a1eb5c45cc1e401d754fc,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d18cb8bc-5a90-4410-8af5-65fbf9828e5c name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.219394155Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=db34723e-5e5b-4842-97b9-18b42576ef9b name=/runtime.v1.RuntimeService/Version
	Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.219486745Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=db34723e-5e5b-4842-97b9-18b42576ef9b name=/runtime.v1.RuntimeService/Version
	Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.221167257Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0bcd6100-0793-4c80-afa2-fae84284a4ea name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.223364608Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765617404223333899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0bcd6100-0793-4c80-afa2-fae84284a4ea name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.224300248Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=388720cb-c834-48e3-8144-e0da1d6e59b4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.224368126Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=388720cb-c834-48e3-8144-e0da1d6e59b4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.224889244Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:628516a7d73533daff5dd847807b063734b490b3be17f322069b5862cab3bbda,PodSandboxId:7d574a2f34f68882bdbd41bbed987b533c20016e5acd576de42b45b2f324fd59,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765617259603055897,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6b69c078-1088-484d-990b-d8794ed9b2c6,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e37ee820ac55b9c1336b4d106799b475cfaa12f0a5d71aa35438310e3ce95399,PodSandboxId:dced63ac053305c7768f9cd746ec2d926ee40d250a6a06227d94c76fd66672f3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765617240445757059,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: decad740-d6c4-4453-a6a3-0a9ac1f58430,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3c9248017da5b177fe1f69d3216863ca995f332ee106f91d1d36bccc73dfe7,PodSandboxId:41bffb9ec70750921a60e8e0f102b77b9dfdb3057eecc6c8a33d6cc78e2021d9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765617229970386771,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-w2qnr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8b2f9c0b-2e02-4126-9d1d-c1f045ca6f6b,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:628eb745c903d7a888d0fb1d5f9b057d7d5ada312e38a16b6699fd6395681a02,PodSandboxId:2d56fcdd6385824c310e7be7c766a9e95ca4da7b4a8575f4a4d455d21f2e803f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e
,State:CONTAINER_EXITED,CreatedAt:1765617199761909099,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rtxd5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 153bc1ac-d8e9-4540-b55a-2728ae1974e8,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73c912d2dff111e0461477800bb1445315bf3d43661c24a9aa6e2279fc3617b0,PodSandboxId:4505d18199645c770afd301a1ee3881a4007ba99590175cae7dd91ea1410870f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7
e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765617199201265517,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6zvn2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f555fef8-9057-4114-af37-9d7365c0bef2,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86b4f2a90f2e3491953330a91cd5ade57ff093679a59bf9569f93a7b6ef247b0,PodSandboxId:4f8767a0b981d20b0bc7e5d2b9a4b04b6bb23a1a44e35e6ac916938b5cb1d481,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765617180774283051,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d548ec6-ac97-4b00-a992-cf50e0728d3c,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89eb96caabf1c75e5905d56d63737b533b45d5de24141cdf765492315cbd1765,PodSandboxId:8b2438312f009f66d4a55e56fba7f01549c3ac03ca6c6148d7686479be1bfe4c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38
35498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765617159210431241,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pcr8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae35898b-cac4-4c5d-b1f5-3de19fba17ef,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4877853c63147ba15265057e2c32a56362db34c6d6bbc67ceff075e7fe08686,PodSandboxId:4dad2a7dc053f19c5407c74f603ff91fc75fcbc6f12138ce3f39a1b46abafd09,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765617158864388340,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b05e28d-a4a6-4e90-af0c-bf01fd93b1e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2012f15f4ade88a5c865283e3de2526fc6c1a98918db531fe20e87e5809f3b2,PodSandboxId:f9a5e2370f1b141a45a09fdaca5db063a4936d5ce229ece30d51420e77101827,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a
9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765617150825692423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-x9vlt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3722310-4cbe-4697-8045-c8353e07f242,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a8082d73b8ece1f67e20c64e2cfab51d335ac46b0b40b55603142da740c91a3,PodSandboxId:ee4e7efa604ef12b36cdd19d812f24148ab40013f8080d40bda6b4383db8b3de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765617150031472376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f6vpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b60db149-95ea-4d92-88d4-958521a5cf75,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c0f467af6def5dd49ebbfbba9a5ba99410764f3415aaf4f3adf2ba77c16191d,PodSandboxId:f74fa673bfef882302672a71d399a5465966cf243a48410013087564e837a849,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765617137916504372,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-246361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397c8ea73c97f3674fbdc97e9d7e7383,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:538894d57d3ca06d99af82f5f05877513892aa26744c920bec59842908f9af2c,PodSandboxId:47e4cfa9e38fae41f767e03875b44103ab0dd7ac0db7ecf0421933ae7d0242f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765617137886876529,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-246361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 946988fd2b590065078c2500551ccf5e,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe
-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd984a20ab1f80a715a1ade17f930872c415db4424e9b3a206a11cddff88ed81,PodSandboxId:6af2fba32bd98f8091114ab3194cb5e1527b2788f377063378a7ab77dbe8f666,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765617137903829248,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-246361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb770669e3c9ac4d04b00d62d163fe1c,},Annotations:
map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae132e84c3ae2c02d1dfcf431c4e4d10f6186e4af908262d22d2517a2e18c6b8,PodSandboxId:69ad068c41f7050551ff1f728dffef80fcf60dd8834187e5432815f09eeb554f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765617137874957679,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-246361,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 04942281d89a1eb5c45cc1e401d754fc,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=388720cb-c834-48e3-8144-e0da1d6e59b4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.256502648Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6eab6c19-d33d-43e2-ac53-a421217a1007 name=/runtime.v1.RuntimeService/Version
	Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.256789277Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6eab6c19-d33d-43e2-ac53-a421217a1007 name=/runtime.v1.RuntimeService/Version
	Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.258249521Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=672023a3-fbe2-4793-b965-2c789c115278 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.259583657Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765617404259557391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=672023a3-fbe2-4793-b965-2c789c115278 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.260528832Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d0e260f3-671f-430f-bc10-72d1155b5fb1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.260601361Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d0e260f3-671f-430f-bc10-72d1155b5fb1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.260933642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:628516a7d73533daff5dd847807b063734b490b3be17f322069b5862cab3bbda,PodSandboxId:7d574a2f34f68882bdbd41bbed987b533c20016e5acd576de42b45b2f324fd59,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765617259603055897,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6b69c078-1088-484d-990b-d8794ed9b2c6,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e37ee820ac55b9c1336b4d106799b475cfaa12f0a5d71aa35438310e3ce95399,PodSandboxId:dced63ac053305c7768f9cd746ec2d926ee40d250a6a06227d94c76fd66672f3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765617240445757059,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: decad740-d6c4-4453-a6a3-0a9ac1f58430,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3c9248017da5b177fe1f69d3216863ca995f332ee106f91d1d36bccc73dfe7,PodSandboxId:41bffb9ec70750921a60e8e0f102b77b9dfdb3057eecc6c8a33d6cc78e2021d9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765617229970386771,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-w2qnr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8b2f9c0b-2e02-4126-9d1d-c1f045ca6f6b,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:628eb745c903d7a888d0fb1d5f9b057d7d5ada312e38a16b6699fd6395681a02,PodSandboxId:2d56fcdd6385824c310e7be7c766a9e95ca4da7b4a8575f4a4d455d21f2e803f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e
,State:CONTAINER_EXITED,CreatedAt:1765617199761909099,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rtxd5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 153bc1ac-d8e9-4540-b55a-2728ae1974e8,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73c912d2dff111e0461477800bb1445315bf3d43661c24a9aa6e2279fc3617b0,PodSandboxId:4505d18199645c770afd301a1ee3881a4007ba99590175cae7dd91ea1410870f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7
e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765617199201265517,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6zvn2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f555fef8-9057-4114-af37-9d7365c0bef2,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86b4f2a90f2e3491953330a91cd5ade57ff093679a59bf9569f93a7b6ef247b0,PodSandboxId:4f8767a0b981d20b0bc7e5d2b9a4b04b6bb23a1a44e35e6ac916938b5cb1d481,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765617180774283051,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d548ec6-ac97-4b00-a992-cf50e0728d3c,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89eb96caabf1c75e5905d56d63737b533b45d5de24141cdf765492315cbd1765,PodSandboxId:8b2438312f009f66d4a55e56fba7f01549c3ac03ca6c6148d7686479be1bfe4c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38
35498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765617159210431241,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pcr8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae35898b-cac4-4c5d-b1f5-3de19fba17ef,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4877853c63147ba15265057e2c32a56362db34c6d6bbc67ceff075e7fe08686,PodSandboxId:4dad2a7dc053f19c5407c74f603ff91fc75fcbc6f12138ce3f39a1b46abafd09,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765617158864388340,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b05e28d-a4a6-4e90-af0c-bf01fd93b1e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2012f15f4ade88a5c865283e3de2526fc6c1a98918db531fe20e87e5809f3b2,PodSandboxId:f9a5e2370f1b141a45a09fdaca5db063a4936d5ce229ece30d51420e77101827,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a
9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765617150825692423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-x9vlt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3722310-4cbe-4697-8045-c8353e07f242,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a8082d73b8ece1f67e20c64e2cfab51d335ac46b0b40b55603142da740c91a3,PodSandboxId:ee4e7efa604ef12b36cdd19d812f24148ab40013f8080d40bda6b4383db8b3de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765617150031472376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f6vpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b60db149-95ea-4d92-88d4-958521a5cf75,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c0f467af6def5dd49ebbfbba9a5ba99410764f3415aaf4f3adf2ba77c16191d,PodSandboxId:f74fa673bfef882302672a71d399a5465966cf243a48410013087564e837a849,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765617137916504372,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-246361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397c8ea73c97f3674fbdc97e9d7e7383,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:538894d57d3ca06d99af82f5f05877513892aa26744c920bec59842908f9af2c,PodSandboxId:47e4cfa9e38fae41f767e03875b44103ab0dd7ac0db7ecf0421933ae7d0242f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765617137886876529,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-246361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 946988fd2b590065078c2500551ccf5e,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe
-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd984a20ab1f80a715a1ade17f930872c415db4424e9b3a206a11cddff88ed81,PodSandboxId:6af2fba32bd98f8091114ab3194cb5e1527b2788f377063378a7ab77dbe8f666,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765617137903829248,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-246361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb770669e3c9ac4d04b00d62d163fe1c,},Annotations:
map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae132e84c3ae2c02d1dfcf431c4e4d10f6186e4af908262d22d2517a2e18c6b8,PodSandboxId:69ad068c41f7050551ff1f728dffef80fcf60dd8834187e5432815f09eeb554f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765617137874957679,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-246361,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 04942281d89a1eb5c45cc1e401d754fc,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d0e260f3-671f-430f-bc10-72d1155b5fb1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.295266087Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
	Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.295562282Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
	Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.296638307Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1a8f9797-b27a-46ff-b2f8-fd837004fddf name=/runtime.v1.RuntimeService/Version
	Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.296822479Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1a8f9797-b27a-46ff-b2f8-fd837004fddf name=/runtime.v1.RuntimeService/Version
	Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.298282951Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f2de3a8f-da6c-43eb-9150-85dd8ad12e8e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.300158211Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765617404300117951,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2de3a8f-da6c-43eb-9150-85dd8ad12e8e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.301460934Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9367a39c-4fa1-4b13-af73-160c6fb198df name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.301559407Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9367a39c-4fa1-4b13-af73-160c6fb198df name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.302061491Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:628516a7d73533daff5dd847807b063734b490b3be17f322069b5862cab3bbda,PodSandboxId:7d574a2f34f68882bdbd41bbed987b533c20016e5acd576de42b45b2f324fd59,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765617259603055897,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6b69c078-1088-484d-990b-d8794ed9b2c6,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e37ee820ac55b9c1336b4d106799b475cfaa12f0a5d71aa35438310e3ce95399,PodSandboxId:dced63ac053305c7768f9cd746ec2d926ee40d250a6a06227d94c76fd66672f3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765617240445757059,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: decad740-d6c4-4453-a6a3-0a9ac1f58430,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3c9248017da5b177fe1f69d3216863ca995f332ee106f91d1d36bccc73dfe7,PodSandboxId:41bffb9ec70750921a60e8e0f102b77b9dfdb3057eecc6c8a33d6cc78e2021d9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765617229970386771,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-w2qnr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8b2f9c0b-2e02-4126-9d1d-c1f045ca6f6b,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:628eb745c903d7a888d0fb1d5f9b057d7d5ada312e38a16b6699fd6395681a02,PodSandboxId:2d56fcdd6385824c310e7be7c766a9e95ca4da7b4a8575f4a4d455d21f2e803f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e
,State:CONTAINER_EXITED,CreatedAt:1765617199761909099,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rtxd5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 153bc1ac-d8e9-4540-b55a-2728ae1974e8,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73c912d2dff111e0461477800bb1445315bf3d43661c24a9aa6e2279fc3617b0,PodSandboxId:4505d18199645c770afd301a1ee3881a4007ba99590175cae7dd91ea1410870f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7
e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765617199201265517,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6zvn2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f555fef8-9057-4114-af37-9d7365c0bef2,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86b4f2a90f2e3491953330a91cd5ade57ff093679a59bf9569f93a7b6ef247b0,PodSandboxId:4f8767a0b981d20b0bc7e5d2b9a4b04b6bb23a1a44e35e6ac916938b5cb1d481,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765617180774283051,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d548ec6-ac97-4b00-a992-cf50e0728d3c,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89eb96caabf1c75e5905d56d63737b533b45d5de24141cdf765492315cbd1765,PodSandboxId:8b2438312f009f66d4a55e56fba7f01549c3ac03ca6c6148d7686479be1bfe4c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38
35498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765617159210431241,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pcr8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae35898b-cac4-4c5d-b1f5-3de19fba17ef,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4877853c63147ba15265057e2c32a56362db34c6d6bbc67ceff075e7fe08686,PodSandboxId:4dad2a7dc053f19c5407c74f603ff91fc75fcbc6f12138ce3f39a1b46abafd09,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765617158864388340,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b05e28d-a4a6-4e90-af0c-bf01fd93b1e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2012f15f4ade88a5c865283e3de2526fc6c1a98918db531fe20e87e5809f3b2,PodSandboxId:f9a5e2370f1b141a45a09fdaca5db063a4936d5ce229ece30d51420e77101827,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a
9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765617150825692423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-x9vlt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3722310-4cbe-4697-8045-c8353e07f242,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a8082d73b8ece1f67e20c64e2cfab51d335ac46b0b40b55603142da740c91a3,PodSandboxId:ee4e7efa604ef12b36cdd19d812f24148ab40013f8080d40bda6b4383db8b3de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765617150031472376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f6vpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b60db149-95ea-4d92-88d4-958521a5cf75,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c0f467af6def5dd49ebbfbba9a5ba99410764f3415aaf4f3adf2ba77c16191d,PodSandboxId:f74fa673bfef882302672a71d399a5465966cf243a48410013087564e837a849,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765617137916504372,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-246361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397c8ea73c97f3674fbdc97e9d7e7383,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:538894d57d3ca06d99af82f5f05877513892aa26744c920bec59842908f9af2c,PodSandboxId:47e4cfa9e38fae41f767e03875b44103ab0dd7ac0db7ecf0421933ae7d0242f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765617137886876529,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-246361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 946988fd2b590065078c2500551ccf5e,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe
-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd984a20ab1f80a715a1ade17f930872c415db4424e9b3a206a11cddff88ed81,PodSandboxId:6af2fba32bd98f8091114ab3194cb5e1527b2788f377063378a7ab77dbe8f666,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765617137903829248,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-246361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb770669e3c9ac4d04b00d62d163fe1c,},Annotations:
map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae132e84c3ae2c02d1dfcf431c4e4d10f6186e4af908262d22d2517a2e18c6b8,PodSandboxId:69ad068c41f7050551ff1f728dffef80fcf60dd8834187e5432815f09eeb554f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765617137874957679,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-246361,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 04942281d89a1eb5c45cc1e401d754fc,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9367a39c-4fa1-4b13-af73-160c6fb198df name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	628516a7d7353       public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff                           2 minutes ago       Running             nginx                     0                   7d574a2f34f68       nginx                                       default
	e37ee820ac55b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   dced63ac05330       busybox                                     default
	5e3c9248017da       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad             2 minutes ago       Running             controller                0                   41bffb9ec7075       ingress-nginx-controller-85d4c799dd-w2qnr   ingress-nginx
	628eb745c903d       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                             3 minutes ago       Exited              patch                     1                   2d56fcdd63858       ingress-nginx-admission-patch-rtxd5         ingress-nginx
	73c912d2dff11       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   3 minutes ago       Exited              create                    0                   4505d18199645       ingress-nginx-admission-create-6zvn2        ingress-nginx
	86b4f2a90f2e3       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               3 minutes ago       Running             minikube-ingress-dns      0                   4f8767a0b981d       kube-ingress-dns-minikube                   kube-system
	89eb96caabf1c       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   8b2438312f009       amd-gpu-device-plugin-pcr8k                 kube-system
	a4877853c6314       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   4dad2a7dc053f       storage-provisioner                         kube-system
	d2012f15f4ade       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   f9a5e2370f1b1       coredns-66bc5c9577-x9vlt                    kube-system
	7a8082d73b8ec       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                             4 minutes ago       Running             kube-proxy                0                   ee4e7efa604ef       kube-proxy-f6vpr                            kube-system
	1c0f467af6def       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                             4 minutes ago       Running             kube-scheduler            0                   f74fa673bfef8       kube-scheduler-addons-246361                kube-system
	fd984a20ab1f8       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                             4 minutes ago       Running             kube-apiserver            0                   6af2fba32bd98       kube-apiserver-addons-246361                kube-system
	538894d57d3ca       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                             4 minutes ago       Running             kube-controller-manager   0                   47e4cfa9e38fa       kube-controller-manager-addons-246361       kube-system
	ae132e84c3ae2       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                             4 minutes ago       Running             etcd                      0                   69ad068c41f70       etcd-addons-246361                          kube-system
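	The table above is the CRI-O view of the node. A roughly equivalent listing can be pulled straight from the minikube VM with crictl, which is handy when kubectl itself is suspect (a sketch, assuming the addons-246361 profile is still running and the test-built binary is on hand; the container ID prefix is taken from the table above):
	
	    out/minikube-linux-amd64 -p addons-246361 ssh "sudo crictl ps -a"
	    out/minikube-linux-amd64 -p addons-246361 ssh "sudo crictl logs 5e3c9248017da"   # print the ingress-nginx controller container logs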
	
	
	==> coredns [d2012f15f4ade88a5c865283e3de2526fc6c1a98918db531fe20e87e5809f3b2] <==
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	[INFO] Reloading complete
	[INFO] 127.0.0.1:39775 - 62444 "HINFO IN 6417589294913946888.7430898304822193385. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.095304032s
	[INFO] 10.244.0.23:39738 - 8152 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000372247s
	[INFO] 10.244.0.23:41562 - 30552 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002697213s
	[INFO] 10.244.0.23:45553 - 57206 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000202551s
	[INFO] 10.244.0.23:37181 - 44543 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000262199s
	[INFO] 10.244.0.23:43344 - 20817 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000138134s
	[INFO] 10.244.0.23:41976 - 9043 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000174261s
	[INFO] 10.244.0.23:58992 - 61906 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001148837s
	[INFO] 10.244.0.23:47672 - 7753 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.003439575s
	[INFO] 10.244.0.27:59950 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000556418s
	[INFO] 10.244.0.27:49848 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00015588s
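	The i/o timeouts against 10.96.0.1:443 near the top of this log mean coredns briefly could not reach the kube-apiserver service before the reload; the later NOERROR/NXDOMAIN answers show resolution working again. A quick way to re-check in-cluster DNS end to end is a throwaway busybox pod (a sketch; the pod name dns-probe is arbitrary, the image is the same one used by the test's busybox pod):
	
	    kubectl --context addons-246361 run dns-probe --rm -it --restart=Never \
	      --image=gcr.io/k8s-minikube/busybox -- nslookup kubernetes.default.svc.cluster.local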
	
	
	==> describe nodes <==
	Name:               addons-246361
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-246361
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b
	                    minikube.k8s.io/name=addons-246361
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T09_12_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-246361
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 09:12:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-246361
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 09:16:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 09:14:57 +0000   Sat, 13 Dec 2025 09:12:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 09:14:57 +0000   Sat, 13 Dec 2025 09:12:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 09:14:57 +0000   Sat, 13 Dec 2025 09:12:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 09:14:57 +0000   Sat, 13 Dec 2025 09:12:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.185
	  Hostname:    addons-246361
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 27894c69ae154bb1a7622eea43d7ca9d
	  System UUID:                27894c69-ae15-4bb1-a762-2eea43d7ca9d
	  Boot ID:                    7ded0609-6263-48ca-9a1f-2025ab0ab76a
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  default                     hello-world-app-5d498dc89-9kxwk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-w2qnr    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m7s
	  kube-system                 amd-gpu-device-plugin-pcr8k                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 coredns-66bc5c9577-x9vlt                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m15s
	  kube-system                 etcd-addons-246361                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m20s
	  kube-system                 kube-apiserver-addons-246361                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-controller-manager-addons-246361        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-proxy-f6vpr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-scheduler-addons-246361                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m13s                  kube-proxy       
	  Normal  Starting                 4m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m27s (x8 over 4m27s)  kubelet          Node addons-246361 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m27s (x8 over 4m27s)  kubelet          Node addons-246361 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m27s (x7 over 4m27s)  kubelet          Node addons-246361 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m20s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m20s                  kubelet          Node addons-246361 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m20s                  kubelet          Node addons-246361 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m20s                  kubelet          Node addons-246361 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m19s                  kubelet          Node addons-246361 status is now: NodeReady
	  Normal  RegisteredNode           4m16s                  node-controller  Node addons-246361 event: Registered Node addons-246361 in Controller
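	The node snapshot above reflects the state at log-collection time; for a fresher view while debugging, the same data can be queried directly (a sketch using the same kubeconfig context):
	
	    kubectl --context addons-246361 describe node addons-246361
	    kubectl --context addons-246361 get pods -A -o wide --field-selector spec.nodeName=addons-246361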
	
	
	==> dmesg <==
	[  +0.816818] kauditd_printk_skb: 387 callbacks suppressed
	[  +5.459264] kauditd_printk_skb: 7 callbacks suppressed
	[Dec13 09:13] kauditd_printk_skb: 11 callbacks suppressed
	[  +7.259680] kauditd_printk_skb: 26 callbacks suppressed
	[  +7.332277] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.373178] kauditd_printk_skb: 146 callbacks suppressed
	[  +4.239429] kauditd_printk_skb: 52 callbacks suppressed
	[  +6.020570] kauditd_printk_skb: 95 callbacks suppressed
	[  +4.789663] kauditd_printk_skb: 96 callbacks suppressed
	[  +0.000924] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.349381] kauditd_printk_skb: 53 callbacks suppressed
	[Dec13 09:14] kauditd_printk_skb: 47 callbacks suppressed
	[  +9.383074] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.590573] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.813482] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.000041] kauditd_printk_skb: 49 callbacks suppressed
	[  +0.445654] kauditd_printk_skb: 117 callbacks suppressed
	[  +4.456361] kauditd_printk_skb: 98 callbacks suppressed
	[  +1.809988] kauditd_printk_skb: 113 callbacks suppressed
	[  +0.000821] kauditd_printk_skb: 106 callbacks suppressed
	[Dec13 09:15] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.000047] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.273519] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.479298] kauditd_printk_skb: 130 callbacks suppressed
	[Dec13 09:16] kauditd_printk_skb: 7 callbacks suppressed
	
	
	==> etcd [ae132e84c3ae2c02d1dfcf431c4e4d10f6186e4af908262d22d2517a2e18c6b8] <==
	{"level":"warn","ts":"2025-12-13T09:13:27.811921Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"313.795313ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T09:13:27.812077Z","caller":"traceutil/trace.go:172","msg":"trace[516222104] range","detail":"{range_begin:/registry/secrets; range_end:; response_count:0; response_revision:1077; }","duration":"313.959656ms","start":"2025-12-13T09:13:27.498111Z","end":"2025-12-13T09:13:27.812070Z","steps":["trace[516222104] 'agreement among raft nodes before linearized reading'  (duration: 313.560605ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T09:13:27.812155Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-13T09:13:27.498096Z","time spent":"314.051565ms","remote":"127.0.0.1:51888","response type":"/etcdserverpb.KV/Range","request count":0,"request size":21,"response count":0,"response size":29,"request content":"key:\"/registry/secrets\" limit:1 "}
	{"level":"warn","ts":"2025-12-13T09:13:27.812405Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-13T09:13:27.445045Z","time spent":"366.695857ms","remote":"127.0.0.1:51994","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1066 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-12-13T09:13:27.812582Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"228.909455ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T09:13:27.812617Z","caller":"traceutil/trace.go:172","msg":"trace[1456460595] range","detail":"{range_begin:/registry/serviceaccounts; range_end:; response_count:0; response_revision:1077; }","duration":"228.945387ms","start":"2025-12-13T09:13:27.583666Z","end":"2025-12-13T09:13:27.812611Z","steps":["trace[1456460595] 'agreement among raft nodes before linearized reading'  (duration: 228.891456ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T09:13:27.812645Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.428717ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-12-13T09:13:27.812750Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"151.396624ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T09:13:27.812762Z","caller":"traceutil/trace.go:172","msg":"trace[833198630] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1077; }","duration":"101.547932ms","start":"2025-12-13T09:13:27.711208Z","end":"2025-12-13T09:13:27.812756Z","steps":["trace[833198630] 'agreement among raft nodes before linearized reading'  (duration: 101.412128ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:13:27.812787Z","caller":"traceutil/trace.go:172","msg":"trace[1769544653] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1077; }","duration":"151.412387ms","start":"2025-12-13T09:13:27.661349Z","end":"2025-12-13T09:13:27.812761Z","steps":["trace[1769544653] 'agreement among raft nodes before linearized reading'  (duration: 151.385002ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T09:13:27.812874Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"165.865002ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T09:13:27.812906Z","caller":"traceutil/trace.go:172","msg":"trace[142762208] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1077; }","duration":"165.896582ms","start":"2025-12-13T09:13:27.647004Z","end":"2025-12-13T09:13:27.812901Z","steps":["trace[142762208] 'agreement among raft nodes before linearized reading'  (duration: 165.856364ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T09:13:27.812957Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"185.329829ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T09:13:27.812983Z","caller":"traceutil/trace.go:172","msg":"trace[2056939821] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1077; }","duration":"185.35643ms","start":"2025-12-13T09:13:27.627623Z","end":"2025-12-13T09:13:27.812979Z","steps":["trace[2056939821] 'agreement among raft nodes before linearized reading'  (duration: 185.319823ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:13:31.951857Z","caller":"traceutil/trace.go:172","msg":"trace[1920715489] transaction","detail":"{read_only:false; response_revision:1106; number_of_response:1; }","duration":"114.940122ms","start":"2025-12-13T09:13:31.836904Z","end":"2025-12-13T09:13:31.951844Z","steps":["trace[1920715489] 'process raft request'  (duration: 114.834497ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:13:49.782448Z","caller":"traceutil/trace.go:172","msg":"trace[865306237] linearizableReadLoop","detail":"{readStateIndex:1213; appliedIndex:1213; }","duration":"154.451185ms","start":"2025-12-13T09:13:49.627964Z","end":"2025-12-13T09:13:49.782415Z","steps":["trace[865306237] 'read index received'  (duration: 154.446026ms)","trace[865306237] 'applied index is now lower than readState.Index'  (duration: 4.566µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T09:13:49.782666Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"154.6615ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T09:13:49.782685Z","caller":"traceutil/trace.go:172","msg":"trace[949296125] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1179; }","duration":"154.718895ms","start":"2025-12-13T09:13:49.627961Z","end":"2025-12-13T09:13:49.782680Z","steps":["trace[949296125] 'agreement among raft nodes before linearized reading'  (duration: 154.633501ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:13:49.785905Z","caller":"traceutil/trace.go:172","msg":"trace[571349889] transaction","detail":"{read_only:false; response_revision:1180; number_of_response:1; }","duration":"162.545309ms","start":"2025-12-13T09:13:49.623347Z","end":"2025-12-13T09:13:49.785892Z","steps":["trace[571349889] 'process raft request'  (duration: 159.543384ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:14:25.180461Z","caller":"traceutil/trace.go:172","msg":"trace[749875535] linearizableReadLoop","detail":"{readStateIndex:1453; appliedIndex:1453; }","duration":"283.59722ms","start":"2025-12-13T09:14:24.896844Z","end":"2025-12-13T09:14:25.180441Z","steps":["trace[749875535] 'read index received'  (duration: 283.544319ms)","trace[749875535] 'applied index is now lower than readState.Index'  (duration: 6.215µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T09:14:25.180637Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"283.771537ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T09:14:25.180661Z","caller":"traceutil/trace.go:172","msg":"trace[525254140] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1412; }","duration":"283.8178ms","start":"2025-12-13T09:14:24.896838Z","end":"2025-12-13T09:14:25.180656Z","steps":["trace[525254140] 'agreement among raft nodes before linearized reading'  (duration: 283.748292ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T09:14:25.181960Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.420483ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/registry-6b586f9694-4vn9j.1880bb7299047c65\" limit:1 ","response":"range_response_count:1 size:826"}
	{"level":"info","ts":"2025-12-13T09:14:25.182012Z","caller":"traceutil/trace.go:172","msg":"trace[1755167697] range","detail":"{range_begin:/registry/events/kube-system/registry-6b586f9694-4vn9j.1880bb7299047c65; range_end:; response_count:1; response_revision:1413; }","duration":"131.48138ms","start":"2025-12-13T09:14:25.050522Z","end":"2025-12-13T09:14:25.182003Z","steps":["trace[1755167697] 'agreement among raft nodes before linearized reading'  (duration: 131.349296ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:14:25.182634Z","caller":"traceutil/trace.go:172","msg":"trace[662986795] transaction","detail":"{read_only:false; response_revision:1413; number_of_response:1; }","duration":"290.398267ms","start":"2025-12-13T09:14:24.892222Z","end":"2025-12-13T09:14:25.182620Z","steps":["trace[662986795] 'process raft request'  (duration: 289.543981ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:16:44 up 4 min,  0 users,  load average: 0.27, 0.85, 0.45
	Linux addons-246361 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 11 23:11:39 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [fd984a20ab1f80a715a1ade17f930872c415db4424e9b3a206a11cddff88ed81] <==
	E1213 09:13:11.748227       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.156.47:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.156.47:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.156.47:443: connect: connection refused" logger="UnhandledError"
	E1213 09:13:11.752619       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.156.47:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.156.47:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.156.47:443: connect: connection refused" logger="UnhandledError"
	I1213 09:13:11.859777       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1213 09:14:06.679631       1 conn.go:339] Error on socket receive: read tcp 192.168.39.185:8443->192.168.39.1:46614: use of closed network connection
	E1213 09:14:06.868602       1 conn.go:339] Error on socket receive: read tcp 192.168.39.185:8443->192.168.39.1:46650: use of closed network connection
	I1213 09:14:15.470086       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1213 09:14:15.684514       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.52.103"}
	I1213 09:14:16.245950       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.226.2"}
	E1213 09:15:01.166250       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1213 09:15:02.806230       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1213 09:15:12.773571       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1213 09:15:18.697363       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 09:15:18.697526       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 09:15:18.729081       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 09:15:18.729638       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 09:15:18.738225       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 09:15:18.738323       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 09:15:18.771754       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 09:15:18.771832       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 09:15:18.840855       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 09:15:18.840900       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1213 09:15:19.738278       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1213 09:15:19.841231       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1213 09:15:19.914316       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1213 09:16:43.149037       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.251.12"}
	
	
	==> kube-controller-manager [538894d57d3ca06d99af82f5f05877513892aa26744c920bec59842908f9af2c] <==
	I1213 09:15:28.376196       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1213 09:15:28.594099       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 09:15:28.595248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 09:15:28.870451       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 09:15:28.871610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 09:15:29.932786       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 09:15:29.933935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 09:15:35.222638       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 09:15:35.223651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 09:15:36.653149       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 09:15:36.654449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 09:15:40.917795       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 09:15:40.918826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 09:15:56.597011       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 09:15:56.598093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 09:15:59.780816       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 09:15:59.784555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 09:16:00.325325       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 09:16:00.326347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 09:16:25.415503       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 09:16:25.416992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 09:16:43.012743       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 09:16:43.014833       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1213 09:16:43.164164       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1213 09:16:43.168465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [7a8082d73b8ece1f67e20c64e2cfab51d335ac46b0b40b55603142da740c91a3] <==
	I1213 09:12:30.979101       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 09:12:31.186975       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 09:12:31.193535       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.185"]
	E1213 09:12:31.195598       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:12:31.525594       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 09:12:31.525644       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 09:12:31.525668       1 server_linux.go:132] "Using iptables Proxier"
	I1213 09:12:31.543959       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:12:31.545104       1 server.go:527] "Version info" version="v1.34.2"
	I1213 09:12:31.545145       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:12:31.556630       1 config.go:200] "Starting service config controller"
	I1213 09:12:31.557244       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:12:31.557274       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:12:31.557286       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:12:31.557296       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:12:31.557300       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:12:31.559188       1 config.go:309] "Starting node config controller"
	I1213 09:12:31.559239       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:12:31.559257       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 09:12:31.658111       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 09:12:31.658792       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 09:12:31.659837       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1c0f467af6def5dd49ebbfbba9a5ba99410764f3415aaf4f3adf2ba77c16191d] <==
	E1213 09:12:21.220530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 09:12:21.220578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 09:12:21.220790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 09:12:21.221017       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 09:12:21.221101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 09:12:21.221144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 09:12:21.224675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 09:12:21.224896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 09:12:21.224947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 09:12:21.224983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 09:12:22.082860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 09:12:22.097331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 09:12:22.110250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 09:12:22.117022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 09:12:22.163270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 09:12:22.183246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 09:12:22.190479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 09:12:22.224060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 09:12:22.239668       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 09:12:22.284480       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 09:12:22.344466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 09:12:22.370879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 09:12:22.387559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 09:12:22.753373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1213 09:12:25.211650       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 09:15:22 addons-246361 kubelet[1502]: I1213 09:15:22.049199    1502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bebdd078-f41c-4293-a21f-61f2269782c8" path="/var/lib/kubelet/pods/bebdd078-f41c-4293-a21f-61f2269782c8/volumes"
	Dec 13 09:15:24 addons-246361 kubelet[1502]: E1213 09:15:24.374010    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765617324372260704 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 09:15:24 addons-246361 kubelet[1502]: E1213 09:15:24.374131    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765617324372260704 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 09:15:26 addons-246361 kubelet[1502]: I1213 09:15:26.056631    1502 scope.go:117] "RemoveContainer" containerID="bb4911f9d799ec7c39a154e01ac52d6cf318e1e6525d17f41396588b795c3a4b"
	Dec 13 09:15:26 addons-246361 kubelet[1502]: I1213 09:15:26.173386    1502 scope.go:117] "RemoveContainer" containerID="66972a6c4b0b32e33dbc6586aca0c68e24bd547ff529df7918f95aa788de470f"
	Dec 13 09:15:26 addons-246361 kubelet[1502]: I1213 09:15:26.289891    1502 scope.go:117] "RemoveContainer" containerID="a9c32a8bf13a339f14d8693b4abf0e4c242bc5a950af143044c5a37fa739ae66"
	Dec 13 09:15:34 addons-246361 kubelet[1502]: E1213 09:15:34.377002    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765617334376436885 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 09:15:34 addons-246361 kubelet[1502]: E1213 09:15:34.377025    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765617334376436885 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 09:15:44 addons-246361 kubelet[1502]: E1213 09:15:44.380535    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765617344380136066 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 09:15:44 addons-246361 kubelet[1502]: E1213 09:15:44.380566    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765617344380136066 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 09:15:54 addons-246361 kubelet[1502]: E1213 09:15:54.385400    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765617354384989722 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 09:15:54 addons-246361 kubelet[1502]: E1213 09:15:54.385772    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765617354384989722 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 09:16:04 addons-246361 kubelet[1502]: E1213 09:16:04.389689    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765617364388867829 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 09:16:04 addons-246361 kubelet[1502]: E1213 09:16:04.389775    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765617364388867829 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 09:16:13 addons-246361 kubelet[1502]: I1213 09:16:13.042657    1502 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 09:16:14 addons-246361 kubelet[1502]: E1213 09:16:14.393070    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765617374392568786 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 09:16:14 addons-246361 kubelet[1502]: E1213 09:16:14.393096    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765617374392568786 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 09:16:24 addons-246361 kubelet[1502]: E1213 09:16:24.396979    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765617384396426618 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 09:16:24 addons-246361 kubelet[1502]: E1213 09:16:24.397000    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765617384396426618 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 09:16:34 addons-246361 kubelet[1502]: E1213 09:16:34.400608    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765617394399911444 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 09:16:34 addons-246361 kubelet[1502]: E1213 09:16:34.400645    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765617394399911444 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 09:16:42 addons-246361 kubelet[1502]: I1213 09:16:42.044485    1502 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-pcr8k" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 09:16:43 addons-246361 kubelet[1502]: I1213 09:16:43.118236    1502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwbs5\" (UniqueName: \"kubernetes.io/projected/3f706e5d-516a-4d79-b9f6-5f8085a46b78-kube-api-access-dwbs5\") pod \"hello-world-app-5d498dc89-9kxwk\" (UID: \"3f706e5d-516a-4d79-b9f6-5f8085a46b78\") " pod="default/hello-world-app-5d498dc89-9kxwk"
	Dec 13 09:16:44 addons-246361 kubelet[1502]: E1213 09:16:44.403848    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765617404403431524 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 13 09:16:44 addons-246361 kubelet[1502]: E1213 09:16:44.403870    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765617404403431524 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	
	
	==> storage-provisioner [a4877853c63147ba15265057e2c32a56362db34c6d6bbc67ceff075e7fe08686] <==
	W1213 09:16:19.198798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:16:21.202521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:16:21.207807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:16:23.212120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:16:23.220004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:16:25.224007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:16:25.229280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:16:27.232617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:16:27.237850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:16:29.241470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:16:29.246873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:16:31.251038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:16:31.260687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:16:33.265540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:16:33.270888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:16:35.274609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:16:35.280167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:16:37.283651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:16:37.290448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:16:39.293448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:16:39.301807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:16:41.305581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:16:41.310893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:16:43.316076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:16:43.325569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-246361 -n addons-246361
helpers_test.go:270: (dbg) Run:  kubectl --context addons-246361 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-9kxwk ingress-nginx-admission-create-6zvn2 ingress-nginx-admission-patch-rtxd5
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-246361 describe pod hello-world-app-5d498dc89-9kxwk ingress-nginx-admission-create-6zvn2 ingress-nginx-admission-patch-rtxd5
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-246361 describe pod hello-world-app-5d498dc89-9kxwk ingress-nginx-admission-create-6zvn2 ingress-nginx-admission-patch-rtxd5: exit status 1 (77.991878ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-9kxwk
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-246361/192.168.39.185
	Start Time:       Sat, 13 Dec 2025 09:16:43 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dwbs5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dwbs5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-9kxwk to addons-246361
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-6zvn2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-rtxd5" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-246361 describe pod hello-world-app-5d498dc89-9kxwk ingress-nginx-admission-create-6zvn2 ingress-nginx-admission-patch-rtxd5: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-246361 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-246361 addons disable ingress-dns --alsologtostderr -v=1: (1.073879141s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-246361 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-246361 addons disable ingress --alsologtostderr -v=1: (7.795214503s)
--- FAIL: TestAddons/parallel/Ingress (159.09s)
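
Triage sketch (hypothetical, not part of the recorded run; it assumes the addons-246361 profile is still up): the commands below inspect the ingress-nginx controller pods and the default-namespace services and pods involved in this test, which is a reasonable first step when TestAddons/parallel/Ingress fails.

	# hypothetical follow-up commands, not captured in this report
	kubectl --context addons-246361 -n ingress-nginx get pods -o wide
	kubectl --context addons-246361 -n default get svc,ingress
	kubectl --context addons-246361 -n default describe pod hello-world-app-5d498dc89-9kxwk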

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (349.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-553391 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1213 09:27:37.813805  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:27:37.820247  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:27:37.831739  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:27:37.853346  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:27:37.894842  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:27:37.976485  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:27:38.138316  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:27:38.460006  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:27:39.102082  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:27:40.383735  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:27:42.946744  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:27:48.068630  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:27:58.310637  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:28:18.792989  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:28:56.558831  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:28:59.755009  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:30:21.680351  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-553391 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5m47.528124485s)

                                                
                                                
-- stdout --
	* [functional-553391] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "functional-553391" primary control-plane node in "functional-553391" cluster
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.29.1 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded

                                                
                                                
** /stderr **
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-amd64 start -p functional-553391 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:776: restart took 5m47.528359843s for "functional-553391" cluster.
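
Reproduction sketch (hypothetical, not part of the recorded run; it assumes the functional-553391 profile still exists): the failing start could be repeated with minikube's verbose logging flags, which this report already uses elsewhere, to trace where the WaitExtra context deadline is exceeded.

	# hypothetical verbose re-run, not captured in this report
	out/minikube-linux-amd64 start -p functional-553391 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all --alsologtostderr -v=8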
I1213 09:31:38.049205  391877 config.go:182] Loaded profile config "functional-553391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-553391 -n functional-553391
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-553391 logs -n 25: (1.240239606s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-992282 ssh pgrep buildkitd                                                                                                           │ functional-992282 │ jenkins │ v1.37.0 │ 13 Dec 25 09:23 UTC │                     │
	│ image   │ functional-992282 image build -t localhost/my-image:functional-992282 testdata/build --alsologtostderr                                          │ functional-992282 │ jenkins │ v1.37.0 │ 13 Dec 25 09:23 UTC │ 13 Dec 25 09:23 UTC │
	│ image   │ functional-992282 image ls --format json --alsologtostderr                                                                                      │ functional-992282 │ jenkins │ v1.37.0 │ 13 Dec 25 09:23 UTC │ 13 Dec 25 09:23 UTC │
	│ image   │ functional-992282 image ls --format table --alsologtostderr                                                                                     │ functional-992282 │ jenkins │ v1.37.0 │ 13 Dec 25 09:23 UTC │ 13 Dec 25 09:23 UTC │
	│ image   │ functional-992282 image ls                                                                                                                      │ functional-992282 │ jenkins │ v1.37.0 │ 13 Dec 25 09:23 UTC │ 13 Dec 25 09:23 UTC │
	│ service │ functional-992282 service hello-node-connect --url                                                                                              │ functional-992282 │ jenkins │ v1.37.0 │ 13 Dec 25 09:23 UTC │ 13 Dec 25 09:23 UTC │
	│ delete  │ -p functional-992282                                                                                                                            │ functional-992282 │ jenkins │ v1.37.0 │ 13 Dec 25 09:23 UTC │ 13 Dec 25 09:23 UTC │
	│ start   │ -p functional-553391 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:23 UTC │ 13 Dec 25 09:24 UTC │
	│ start   │ -p functional-553391 --alsologtostderr -v=8                                                                                                     │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:24 UTC │ 13 Dec 25 09:25 UTC │
	│ cache   │ functional-553391 cache add registry.k8s.io/pause:3.1                                                                                           │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ cache   │ functional-553391 cache add registry.k8s.io/pause:3.3                                                                                           │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ cache   │ functional-553391 cache add registry.k8s.io/pause:latest                                                                                        │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ cache   │ functional-553391 cache add minikube-local-cache-test:functional-553391                                                                         │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ cache   │ functional-553391 cache delete minikube-local-cache-test:functional-553391                                                                      │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ cache   │ list                                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ ssh     │ functional-553391 ssh sudo crictl images                                                                                                        │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ ssh     │ functional-553391 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                              │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ ssh     │ functional-553391 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │                     │
	│ cache   │ functional-553391 cache reload                                                                                                                  │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ ssh     │ functional-553391 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ kubectl │ functional-553391 kubectl -- --context functional-553391 get pods                                                                               │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ start   │ -p functional-553391 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                        │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:25:50
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:25:50.578782  400166 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:25:50.578875  400166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:25:50.578878  400166 out.go:374] Setting ErrFile to fd 2...
	I1213 09:25:50.578881  400166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:25:50.579661  400166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
	I1213 09:25:50.580686  400166 out.go:368] Setting JSON to false
	I1213 09:25:50.581687  400166 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4100,"bootTime":1765613851,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:25:50.581810  400166 start.go:143] virtualization: kvm guest
	I1213 09:25:50.583582  400166 out.go:179] * [functional-553391] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:25:50.585200  400166 notify.go:221] Checking for updates...
	I1213 09:25:50.585219  400166 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 09:25:50.586708  400166 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:25:50.588156  400166 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
	I1213 09:25:50.589812  400166 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
	I1213 09:25:50.591279  400166 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:25:50.592675  400166 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:25:50.594447  400166 config.go:182] Loaded profile config "functional-553391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:25:50.594606  400166 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:25:50.626856  400166 out.go:179] * Using the kvm2 driver based on existing profile
	I1213 09:25:50.628023  400166 start.go:309] selected driver: kvm2
	I1213 09:25:50.628032  400166 start.go:927] validating driver "kvm2" against &{Name:functional-553391 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-553391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:25:50.628144  400166 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:25:50.629095  400166 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:25:50.629114  400166 cni.go:84] Creating CNI manager for ""
	I1213 09:25:50.629164  400166 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 09:25:50.629214  400166 start.go:353] cluster config:
	{Name:functional-553391 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-553391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:25:50.629314  400166 iso.go:125] acquiring lock: {Name:mk4ce8bfab58620efe86d1c7a68d79ed9c81b6ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:25:50.630941  400166 out.go:179] * Starting "functional-553391" primary control-plane node in "functional-553391" cluster
	I1213 09:25:50.632366  400166 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 09:25:50.632396  400166 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-387918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1213 09:25:50.632417  400166 cache.go:65] Caching tarball of preloaded images
	I1213 09:25:50.632524  400166 preload.go:238] Found /home/jenkins/minikube-integration/22127-387918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 09:25:50.632535  400166 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 09:25:50.632627  400166 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/config.json ...
	I1213 09:25:50.632839  400166 start.go:360] acquireMachinesLock for functional-553391: {Name:mk911c6c71130df32abbe489ec2f7be251c727ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 09:25:50.632888  400166 start.go:364] duration metric: took 34.74µs to acquireMachinesLock for "functional-553391"
	I1213 09:25:50.632909  400166 start.go:96] Skipping create...Using existing machine configuration
	I1213 09:25:50.632914  400166 fix.go:54] fixHost starting: 
	I1213 09:25:50.635203  400166 fix.go:112] recreateIfNeeded on functional-553391: state=Running err=<nil>
	W1213 09:25:50.635223  400166 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 09:25:50.636899  400166 out.go:252] * Updating the running kvm2 "functional-553391" VM ...
	I1213 09:25:50.636927  400166 machine.go:94] provisionDockerMachine start ...
	I1213 09:25:50.639435  400166 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:50.639770  400166 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
	I1213 09:25:50.639784  400166 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:50.639978  400166 main.go:143] libmachine: Using SSH client type: native
	I1213 09:25:50.640182  400166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1213 09:25:50.640186  400166 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 09:25:50.744492  400166 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-553391
	
	I1213 09:25:50.744521  400166 buildroot.go:166] provisioning hostname "functional-553391"
	I1213 09:25:50.747512  400166 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:50.747889  400166 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
	I1213 09:25:50.747914  400166 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:50.748134  400166 main.go:143] libmachine: Using SSH client type: native
	I1213 09:25:50.748350  400166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1213 09:25:50.748356  400166 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-553391 && echo "functional-553391" | sudo tee /etc/hostname
	I1213 09:25:50.870672  400166 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-553391
	
	I1213 09:25:50.873919  400166 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:50.874404  400166 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
	I1213 09:25:50.874432  400166 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:50.874633  400166 main.go:143] libmachine: Using SSH client type: native
	I1213 09:25:50.874897  400166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1213 09:25:50.874912  400166 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-553391' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-553391/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-553391' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 09:25:50.979254  400166 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 09:25:50.979277  400166 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22127-387918/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-387918/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-387918/.minikube}
	I1213 09:25:50.979299  400166 buildroot.go:174] setting up certificates
	I1213 09:25:50.979308  400166 provision.go:84] configureAuth start
	I1213 09:25:50.981984  400166 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:50.982472  400166 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
	I1213 09:25:50.982493  400166 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:50.984622  400166 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:50.985039  400166 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
	I1213 09:25:50.985077  400166 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:50.985285  400166 provision.go:143] copyHostCerts
	I1213 09:25:50.985368  400166 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-387918/.minikube/ca.pem, removing ...
	I1213 09:25:50.985383  400166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-387918/.minikube/ca.pem
	I1213 09:25:50.985448  400166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-387918/.minikube/ca.pem (1078 bytes)
	I1213 09:25:50.985541  400166 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-387918/.minikube/cert.pem, removing ...
	I1213 09:25:50.985544  400166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-387918/.minikube/cert.pem
	I1213 09:25:50.985570  400166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-387918/.minikube/cert.pem (1123 bytes)
	I1213 09:25:50.985621  400166 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-387918/.minikube/key.pem, removing ...
	I1213 09:25:50.985624  400166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-387918/.minikube/key.pem
	I1213 09:25:50.985646  400166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-387918/.minikube/key.pem (1675 bytes)
	I1213 09:25:50.985695  400166 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-387918/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca-key.pem org=jenkins.functional-553391 san=[127.0.0.1 192.168.39.38 functional-553391 localhost minikube]
	I1213 09:25:51.017750  400166 provision.go:177] copyRemoteCerts
	I1213 09:25:51.017825  400166 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 09:25:51.020427  400166 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:51.020762  400166 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
	I1213 09:25:51.020779  400166 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:51.020885  400166 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/functional-553391/id_rsa Username:docker}
	I1213 09:25:51.105119  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 09:25:51.140258  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 09:25:51.175128  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 09:25:51.209100  400166 provision.go:87] duration metric: took 229.777147ms to configureAuth
	I1213 09:25:51.209124  400166 buildroot.go:189] setting minikube options for container-runtime
	I1213 09:25:51.209396  400166 config.go:182] Loaded profile config "functional-553391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:25:51.212709  400166 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:51.213077  400166 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
	I1213 09:25:51.213103  400166 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:51.213303  400166 main.go:143] libmachine: Using SSH client type: native
	I1213 09:25:51.213529  400166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1213 09:25:51.213538  400166 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 09:25:56.868537  400166 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 09:25:56.868559  400166 machine.go:97] duration metric: took 6.231623162s to provisionDockerMachine
	I1213 09:25:56.868585  400166 start.go:293] postStartSetup for "functional-553391" (driver="kvm2")
	I1213 09:25:56.868599  400166 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 09:25:56.868709  400166 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 09:25:56.872122  400166 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:56.872626  400166 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
	I1213 09:25:56.872646  400166 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:56.872895  400166 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/functional-553391/id_rsa Username:docker}
	I1213 09:25:56.956963  400166 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 09:25:56.961742  400166 info.go:137] Remote host: Buildroot 2025.02
	I1213 09:25:56.961761  400166 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-387918/.minikube/addons for local assets ...
	I1213 09:25:56.961840  400166 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-387918/.minikube/files for local assets ...
	I1213 09:25:56.961906  400166 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-387918/.minikube/files/etc/ssl/certs/3918772.pem -> 3918772.pem in /etc/ssl/certs
	I1213 09:25:56.961987  400166 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-387918/.minikube/files/etc/test/nested/copy/391877/hosts -> hosts in /etc/test/nested/copy/391877
	I1213 09:25:56.962040  400166 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/391877
	I1213 09:25:56.973547  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/files/etc/ssl/certs/3918772.pem --> /etc/ssl/certs/3918772.pem (1708 bytes)
	I1213 09:25:57.003353  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/files/etc/test/nested/copy/391877/hosts --> /etc/test/nested/copy/391877/hosts (40 bytes)
	I1213 09:25:57.033885  400166 start.go:296] duration metric: took 165.284089ms for postStartSetup
	I1213 09:25:57.033925  400166 fix.go:56] duration metric: took 6.401010004s for fixHost
	I1213 09:25:57.037169  400166 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:57.037596  400166 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
	I1213 09:25:57.037616  400166 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:57.037803  400166 main.go:143] libmachine: Using SSH client type: native
	I1213 09:25:57.038089  400166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1213 09:25:57.038097  400166 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1213 09:25:57.141385  400166 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765617957.136517301
	
	I1213 09:25:57.141400  400166 fix.go:216] guest clock: 1765617957.136517301
	I1213 09:25:57.141407  400166 fix.go:229] Guest: 2025-12-13 09:25:57.136517301 +0000 UTC Remote: 2025-12-13 09:25:57.03392761 +0000 UTC m=+6.508038433 (delta=102.589691ms)
	I1213 09:25:57.141423  400166 fix.go:200] guest clock delta is within tolerance: 102.589691ms
	I1213 09:25:57.141427  400166 start.go:83] releasing machines lock for "functional-553391", held for 6.508533532s
	I1213 09:25:57.144511  400166 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:57.145023  400166 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
	I1213 09:25:57.145040  400166 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:57.145663  400166 ssh_runner.go:195] Run: cat /version.json
	I1213 09:25:57.145777  400166 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 09:25:57.149156  400166 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:57.149589  400166 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
	I1213 09:25:57.149608  400166 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:57.149667  400166 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:57.149808  400166 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/functional-553391/id_rsa Username:docker}
	I1213 09:25:57.150178  400166 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
	I1213 09:25:57.150194  400166 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:57.150412  400166 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/functional-553391/id_rsa Username:docker}
	I1213 09:25:57.258626  400166 ssh_runner.go:195] Run: systemctl --version
	I1213 09:25:57.302306  400166 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 09:25:57.500201  400166 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 09:25:57.512598  400166 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 09:25:57.512679  400166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 09:25:57.533867  400166 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 09:25:57.533886  400166 start.go:496] detecting cgroup driver to use...
	I1213 09:25:57.533967  400166 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 09:25:57.578344  400166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 09:25:57.619042  400166 docker.go:218] disabling cri-docker service (if available) ...
	I1213 09:25:57.619141  400166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 09:25:57.670646  400166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 09:25:57.704879  400166 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 09:25:58.019927  400166 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 09:25:58.234680  400166 docker.go:234] disabling docker service ...
	I1213 09:25:58.234757  400166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 09:25:58.273213  400166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 09:25:58.290651  400166 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 09:25:58.486138  400166 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 09:25:58.671126  400166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 09:25:58.688572  400166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:25:58.712791  400166 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 09:25:58.712847  400166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:25:58.725948  400166 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 09:25:58.726012  400166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:25:58.739009  400166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:25:58.751973  400166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:25:58.764818  400166 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 09:25:58.779438  400166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:25:58.793404  400166 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:25:58.807428  400166 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:25:58.820499  400166 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 09:25:58.831715  400166 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 09:25:58.843085  400166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:25:59.015761  400166 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 09:27:29.568760  400166 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.552962989s)
	I1213 09:27:29.568832  400166 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 09:27:29.568898  400166 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 09:27:29.575284  400166 start.go:564] Will wait 60s for crictl version
	I1213 09:27:29.575370  400166 ssh_runner.go:195] Run: which crictl
	I1213 09:27:29.580649  400166 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 09:27:29.620124  400166 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1213 09:27:29.620224  400166 ssh_runner.go:195] Run: crio --version
	I1213 09:27:29.649647  400166 ssh_runner.go:195] Run: crio --version
	I1213 09:27:29.682263  400166 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.29.1 ...
	I1213 09:27:29.686848  400166 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:27:29.687397  400166 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
	I1213 09:27:29.687430  400166 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:27:29.687620  400166 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1213 09:27:29.694396  400166 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 09:27:29.695988  400166 kubeadm.go:884] updating cluster {Name:functional-553391 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-553391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 09:27:29.696163  400166 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 09:27:29.696228  400166 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:27:29.736630  400166 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:27:29.736645  400166 crio.go:433] Images already preloaded, skipping extraction
	I1213 09:27:29.736720  400166 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:27:29.769476  400166 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:27:29.769493  400166 cache_images.go:86] Images are preloaded, skipping loading
	I1213 09:27:29.769501  400166 kubeadm.go:935] updating node { 192.168.39.38 8441 v1.35.0-beta.0 crio true true} ...
	I1213 09:27:29.769637  400166 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-553391 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-553391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 09:27:29.769723  400166 ssh_runner.go:195] Run: crio config
	I1213 09:27:29.817350  400166 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 09:27:29.817373  400166 cni.go:84] Creating CNI manager for ""
	I1213 09:27:29.817383  400166 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 09:27:29.817391  400166 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 09:27:29.817412  400166 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.38 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-553391 NodeName:functional-553391 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 09:27:29.817530  400166 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.38
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-553391"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.38"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.38"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 09:27:29.817592  400166 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 09:27:29.830697  400166 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 09:27:29.830792  400166 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 09:27:29.842880  400166 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1213 09:27:29.864888  400166 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 09:27:29.890269  400166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2074 bytes)
	I1213 09:27:29.914841  400166 ssh_runner.go:195] Run: grep 192.168.39.38	control-plane.minikube.internal$ /etc/hosts
	I1213 09:27:29.919514  400166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:27:30.104888  400166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:27:30.122863  400166 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391 for IP: 192.168.39.38
	I1213 09:27:30.122881  400166 certs.go:195] generating shared ca certs ...
	I1213 09:27:30.122900  400166 certs.go:227] acquiring lock for ca certs: {Name:mkd63ae6418df38b62936a9f8faa40fdd87e4397 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:27:30.123141  400166 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-387918/.minikube/ca.key
	I1213 09:27:30.123210  400166 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-387918/.minikube/proxy-client-ca.key
	I1213 09:27:30.123218  400166 certs.go:257] generating profile certs ...
	I1213 09:27:30.123306  400166 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.key
	I1213 09:27:30.123366  400166 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/apiserver.key.8db172b3
	I1213 09:27:30.123403  400166 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/proxy-client.key
	I1213 09:27:30.123516  400166 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/391877.pem (1338 bytes)
	W1213 09:27:30.123548  400166 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-387918/.minikube/certs/391877_empty.pem, impossibly tiny 0 bytes
	I1213 09:27:30.123555  400166 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 09:27:30.123576  400166 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem (1078 bytes)
	I1213 09:27:30.123595  400166 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/cert.pem (1123 bytes)
	I1213 09:27:30.123614  400166 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/key.pem (1675 bytes)
	I1213 09:27:30.123658  400166 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/files/etc/ssl/certs/3918772.pem (1708 bytes)
	I1213 09:27:30.124476  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 09:27:30.155078  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 09:27:30.185398  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 09:27:30.216509  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 09:27:30.246683  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 09:27:30.277241  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 09:27:30.309147  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 09:27:30.339496  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 09:27:30.369722  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 09:27:30.401275  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/certs/391877.pem --> /usr/share/ca-certificates/391877.pem (1338 bytes)
	I1213 09:27:30.433999  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/files/etc/ssl/certs/3918772.pem --> /usr/share/ca-certificates/3918772.pem (1708 bytes)
	I1213 09:27:30.465419  400166 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 09:27:30.487273  400166 ssh_runner.go:195] Run: openssl version
	I1213 09:27:30.494392  400166 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3918772.pem
	I1213 09:27:30.507578  400166 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3918772.pem /etc/ssl/certs/3918772.pem
	I1213 09:27:30.520253  400166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3918772.pem
	I1213 09:27:30.526202  400166 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 09:23 /usr/share/ca-certificates/3918772.pem
	I1213 09:27:30.526262  400166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3918772.pem
	I1213 09:27:30.534171  400166 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 09:27:30.546711  400166 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:27:30.559987  400166 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 09:27:30.572651  400166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:27:30.578416  400166 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:27:30.578482  400166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:27:30.586465  400166 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 09:27:30.598415  400166 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/391877.pem
	I1213 09:27:30.610812  400166 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/391877.pem /etc/ssl/certs/391877.pem
	I1213 09:27:30.623291  400166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391877.pem
	I1213 09:27:30.628841  400166 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 09:23 /usr/share/ca-certificates/391877.pem
	I1213 09:27:30.628898  400166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391877.pem
	I1213 09:27:30.636824  400166 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 09:27:30.648894  400166 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 09:27:30.654220  400166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 09:27:30.661748  400166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 09:27:30.668972  400166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 09:27:30.676373  400166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 09:27:30.683574  400166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 09:27:30.691120  400166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 09:27:30.699035  400166 kubeadm.go:401] StartCluster: {Name:functional-553391 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-553391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:27:30.699121  400166 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 09:27:30.699188  400166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 09:27:30.738856  400166 cri.go:89] found id: "74dbec4b8aef1a21cd0d8a33ea1dcdf6703376e2fac9decc7229c4653835a2e9"
	I1213 09:27:30.738871  400166 cri.go:89] found id: "b9244bb17b8485f84b8f76fd8b0e1a25b8f2e3ef5cf5a3f95f1be4666b075421"
	I1213 09:27:30.738873  400166 cri.go:89] found id: "abef72de0b38ca6ec98975d20a6d31b464fd8ce72c3f85bebc27de9ee873efce"
	I1213 09:27:30.738876  400166 cri.go:89] found id: "15ccf55277802f6d84e33d122e3b923248435c477a7d8c5e6abca019214467ad"
	I1213 09:27:30.738878  400166 cri.go:89] found id: "43e2ebf7101ff7d080d10386c7911c24013836e1849ba2237f3c3ccf2a0697bb"
	I1213 09:27:30.738880  400166 cri.go:89] found id: "bef6c74863ea74a437871b1d75d0adf6fdd2e13050bd9f5fda41a413df2ad695"
	I1213 09:27:30.738882  400166 cri.go:89] found id: "a7de46befbc341adcce80258301d16635ae42bacd96697be843a3ea37de3097c"
	I1213 09:27:30.738883  400166 cri.go:89] found id: ""
	I1213 09:27:30.738935  400166 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-553391 -n functional-553391
helpers_test.go:270: (dbg) Run:  kubectl --context functional-553391 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (349.29s)
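For reference, the CRI container listing captured in the post-mortem above can be re-run by hand against the same profile. This is only an illustrative sketch (the profile name functional-553391, the label filter, and the container ID are taken from the log above, not new test steps):

    # List all kube-system container IDs known to CRI-O, as the post-mortem does (runs inside the VM).
    out/minikube-linux-amd64 -p functional-553391 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"

    # Inspect one of the returned IDs to see its state and last exit reason.
    out/minikube-linux-amd64 -p functional-553391 ssh "sudo crictl inspect 15ccf55277802f6d84e33d122e3b923248435c477a7d8c5e6abca019214467ad"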

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (1.78s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-553391 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:848: kube-scheduler is not Ready: {Phase:Running Conditions:[{Type:PodReadyToStartContainers Status:False} {Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.39.38 PodIP:192.168.39.38 StartTime:2025-12-13 09:27:32 +0000 UTC ContainerStatuses:[{Name:kube-scheduler State:{Waiting:<nil> Running:<nil> Terminated:0xc0003062a0} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:1 Image:registry.k8s.io/kube-scheduler:v1.35.0-beta.0 ImageID:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46 ContainerID:cri-o://15ccf55277802f6d84e33d122e3b923248435c477a7d8c5e6abca019214467ad}]}
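The check that fails here (functional_test.go:848) keys off the pod's Ready condition rather than its phase: kube-scheduler is Running but reports Ready=False after its restart. A rough command-line equivalent of what the test inspects, assuming the same kube context and label selector shown above, is:

    # Print phase and Ready condition for each control-plane pod (same selector the test uses).
    kubectl --context functional-553391 -n kube-system get po -l tier=control-plane \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'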
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-553391 -n functional-553391
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-553391 logs -n 25: (1.199048709s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-992282 ssh pgrep buildkitd                                                                                                           │ functional-992282 │ jenkins │ v1.37.0 │ 13 Dec 25 09:23 UTC │                     │
	│ image   │ functional-992282 image build -t localhost/my-image:functional-992282 testdata/build --alsologtostderr                                          │ functional-992282 │ jenkins │ v1.37.0 │ 13 Dec 25 09:23 UTC │ 13 Dec 25 09:23 UTC │
	│ image   │ functional-992282 image ls --format json --alsologtostderr                                                                                      │ functional-992282 │ jenkins │ v1.37.0 │ 13 Dec 25 09:23 UTC │ 13 Dec 25 09:23 UTC │
	│ image   │ functional-992282 image ls --format table --alsologtostderr                                                                                     │ functional-992282 │ jenkins │ v1.37.0 │ 13 Dec 25 09:23 UTC │ 13 Dec 25 09:23 UTC │
	│ image   │ functional-992282 image ls                                                                                                                      │ functional-992282 │ jenkins │ v1.37.0 │ 13 Dec 25 09:23 UTC │ 13 Dec 25 09:23 UTC │
	│ service │ functional-992282 service hello-node-connect --url                                                                                              │ functional-992282 │ jenkins │ v1.37.0 │ 13 Dec 25 09:23 UTC │ 13 Dec 25 09:23 UTC │
	│ delete  │ -p functional-992282                                                                                                                            │ functional-992282 │ jenkins │ v1.37.0 │ 13 Dec 25 09:23 UTC │ 13 Dec 25 09:23 UTC │
	│ start   │ -p functional-553391 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:23 UTC │ 13 Dec 25 09:24 UTC │
	│ start   │ -p functional-553391 --alsologtostderr -v=8                                                                                                     │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:24 UTC │ 13 Dec 25 09:25 UTC │
	│ cache   │ functional-553391 cache add registry.k8s.io/pause:3.1                                                                                           │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ cache   │ functional-553391 cache add registry.k8s.io/pause:3.3                                                                                           │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ cache   │ functional-553391 cache add registry.k8s.io/pause:latest                                                                                        │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ cache   │ functional-553391 cache add minikube-local-cache-test:functional-553391                                                                         │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ cache   │ functional-553391 cache delete minikube-local-cache-test:functional-553391                                                                      │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ cache   │ list                                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ ssh     │ functional-553391 ssh sudo crictl images                                                                                                        │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ ssh     │ functional-553391 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                              │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ ssh     │ functional-553391 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │                     │
	│ cache   │ functional-553391 cache reload                                                                                                                  │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ ssh     │ functional-553391 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ kubectl │ functional-553391 kubectl -- --context functional-553391 get pods                                                                               │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │ 13 Dec 25 09:25 UTC │
	│ start   │ -p functional-553391 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                        │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:25 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:25:50
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:25:50.578782  400166 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:25:50.578875  400166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:25:50.578878  400166 out.go:374] Setting ErrFile to fd 2...
	I1213 09:25:50.578881  400166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:25:50.579661  400166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
	I1213 09:25:50.580686  400166 out.go:368] Setting JSON to false
	I1213 09:25:50.581687  400166 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4100,"bootTime":1765613851,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:25:50.581810  400166 start.go:143] virtualization: kvm guest
	I1213 09:25:50.583582  400166 out.go:179] * [functional-553391] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:25:50.585200  400166 notify.go:221] Checking for updates...
	I1213 09:25:50.585219  400166 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 09:25:50.586708  400166 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:25:50.588156  400166 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
	I1213 09:25:50.589812  400166 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
	I1213 09:25:50.591279  400166 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:25:50.592675  400166 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:25:50.594447  400166 config.go:182] Loaded profile config "functional-553391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:25:50.594606  400166 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:25:50.626856  400166 out.go:179] * Using the kvm2 driver based on existing profile
	I1213 09:25:50.628023  400166 start.go:309] selected driver: kvm2
	I1213 09:25:50.628032  400166 start.go:927] validating driver "kvm2" against &{Name:functional-553391 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-553391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:25:50.628144  400166 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:25:50.629095  400166 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:25:50.629114  400166 cni.go:84] Creating CNI manager for ""
	I1213 09:25:50.629164  400166 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 09:25:50.629214  400166 start.go:353] cluster config:
	{Name:functional-553391 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-553391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:25:50.629314  400166 iso.go:125] acquiring lock: {Name:mk4ce8bfab58620efe86d1c7a68d79ed9c81b6ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:25:50.630941  400166 out.go:179] * Starting "functional-553391" primary control-plane node in "functional-553391" cluster
	I1213 09:25:50.632366  400166 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 09:25:50.632396  400166 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-387918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1213 09:25:50.632417  400166 cache.go:65] Caching tarball of preloaded images
	I1213 09:25:50.632524  400166 preload.go:238] Found /home/jenkins/minikube-integration/22127-387918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 09:25:50.632535  400166 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 09:25:50.632627  400166 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/config.json ...
	I1213 09:25:50.632839  400166 start.go:360] acquireMachinesLock for functional-553391: {Name:mk911c6c71130df32abbe489ec2f7be251c727ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 09:25:50.632888  400166 start.go:364] duration metric: took 34.74µs to acquireMachinesLock for "functional-553391"
	I1213 09:25:50.632909  400166 start.go:96] Skipping create...Using existing machine configuration
	I1213 09:25:50.632914  400166 fix.go:54] fixHost starting: 
	I1213 09:25:50.635203  400166 fix.go:112] recreateIfNeeded on functional-553391: state=Running err=<nil>
	W1213 09:25:50.635223  400166 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 09:25:50.636899  400166 out.go:252] * Updating the running kvm2 "functional-553391" VM ...
	I1213 09:25:50.636927  400166 machine.go:94] provisionDockerMachine start ...
	I1213 09:25:50.639435  400166 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:50.639770  400166 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
	I1213 09:25:50.639784  400166 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:50.639978  400166 main.go:143] libmachine: Using SSH client type: native
	I1213 09:25:50.640182  400166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1213 09:25:50.640186  400166 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 09:25:50.744492  400166 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-553391
	
	I1213 09:25:50.744521  400166 buildroot.go:166] provisioning hostname "functional-553391"
	I1213 09:25:50.747512  400166 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:50.747889  400166 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
	I1213 09:25:50.747914  400166 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:50.748134  400166 main.go:143] libmachine: Using SSH client type: native
	I1213 09:25:50.748350  400166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1213 09:25:50.748356  400166 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-553391 && echo "functional-553391" | sudo tee /etc/hostname
	I1213 09:25:50.870672  400166 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-553391
	
	I1213 09:25:50.873919  400166 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:50.874404  400166 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
	I1213 09:25:50.874432  400166 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:50.874633  400166 main.go:143] libmachine: Using SSH client type: native
	I1213 09:25:50.874897  400166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1213 09:25:50.874912  400166 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-553391' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-553391/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-553391' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 09:25:50.979254  400166 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 09:25:50.979277  400166 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22127-387918/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-387918/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-387918/.minikube}
	I1213 09:25:50.979299  400166 buildroot.go:174] setting up certificates
	I1213 09:25:50.979308  400166 provision.go:84] configureAuth start
	I1213 09:25:50.981984  400166 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:50.982472  400166 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
	I1213 09:25:50.982493  400166 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:50.984622  400166 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:50.985039  400166 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
	I1213 09:25:50.985077  400166 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:50.985285  400166 provision.go:143] copyHostCerts
	I1213 09:25:50.985368  400166 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-387918/.minikube/ca.pem, removing ...
	I1213 09:25:50.985383  400166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-387918/.minikube/ca.pem
	I1213 09:25:50.985448  400166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-387918/.minikube/ca.pem (1078 bytes)
	I1213 09:25:50.985541  400166 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-387918/.minikube/cert.pem, removing ...
	I1213 09:25:50.985544  400166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-387918/.minikube/cert.pem
	I1213 09:25:50.985570  400166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-387918/.minikube/cert.pem (1123 bytes)
	I1213 09:25:50.985621  400166 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-387918/.minikube/key.pem, removing ...
	I1213 09:25:50.985624  400166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-387918/.minikube/key.pem
	I1213 09:25:50.985646  400166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-387918/.minikube/key.pem (1675 bytes)
	I1213 09:25:50.985695  400166 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-387918/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca-key.pem org=jenkins.functional-553391 san=[127.0.0.1 192.168.39.38 functional-553391 localhost minikube]
	I1213 09:25:51.017750  400166 provision.go:177] copyRemoteCerts
	I1213 09:25:51.017825  400166 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 09:25:51.020427  400166 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:51.020762  400166 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
	I1213 09:25:51.020779  400166 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:51.020885  400166 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/functional-553391/id_rsa Username:docker}
	I1213 09:25:51.105119  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 09:25:51.140258  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 09:25:51.175128  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 09:25:51.209100  400166 provision.go:87] duration metric: took 229.777147ms to configureAuth
	I1213 09:25:51.209124  400166 buildroot.go:189] setting minikube options for container-runtime
	I1213 09:25:51.209396  400166 config.go:182] Loaded profile config "functional-553391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:25:51.212709  400166 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:51.213077  400166 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
	I1213 09:25:51.213103  400166 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:51.213303  400166 main.go:143] libmachine: Using SSH client type: native
	I1213 09:25:51.213529  400166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1213 09:25:51.213538  400166 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 09:25:56.868537  400166 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 09:25:56.868559  400166 machine.go:97] duration metric: took 6.231623162s to provisionDockerMachine
	I1213 09:25:56.868585  400166 start.go:293] postStartSetup for "functional-553391" (driver="kvm2")
	I1213 09:25:56.868599  400166 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 09:25:56.868709  400166 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 09:25:56.872122  400166 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:56.872626  400166 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
	I1213 09:25:56.872646  400166 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:56.872895  400166 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/functional-553391/id_rsa Username:docker}
	I1213 09:25:56.956963  400166 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 09:25:56.961742  400166 info.go:137] Remote host: Buildroot 2025.02
	I1213 09:25:56.961761  400166 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-387918/.minikube/addons for local assets ...
	I1213 09:25:56.961840  400166 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-387918/.minikube/files for local assets ...
	I1213 09:25:56.961906  400166 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-387918/.minikube/files/etc/ssl/certs/3918772.pem -> 3918772.pem in /etc/ssl/certs
	I1213 09:25:56.961987  400166 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-387918/.minikube/files/etc/test/nested/copy/391877/hosts -> hosts in /etc/test/nested/copy/391877
	I1213 09:25:56.962040  400166 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/391877
	I1213 09:25:56.973547  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/files/etc/ssl/certs/3918772.pem --> /etc/ssl/certs/3918772.pem (1708 bytes)
	I1213 09:25:57.003353  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/files/etc/test/nested/copy/391877/hosts --> /etc/test/nested/copy/391877/hosts (40 bytes)
	I1213 09:25:57.033885  400166 start.go:296] duration metric: took 165.284089ms for postStartSetup
	I1213 09:25:57.033925  400166 fix.go:56] duration metric: took 6.401010004s for fixHost
	I1213 09:25:57.037169  400166 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:57.037596  400166 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
	I1213 09:25:57.037616  400166 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:57.037803  400166 main.go:143] libmachine: Using SSH client type: native
	I1213 09:25:57.038089  400166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I1213 09:25:57.038097  400166 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1213 09:25:57.141385  400166 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765617957.136517301
	
	I1213 09:25:57.141400  400166 fix.go:216] guest clock: 1765617957.136517301
	I1213 09:25:57.141407  400166 fix.go:229] Guest: 2025-12-13 09:25:57.136517301 +0000 UTC Remote: 2025-12-13 09:25:57.03392761 +0000 UTC m=+6.508038433 (delta=102.589691ms)
	I1213 09:25:57.141423  400166 fix.go:200] guest clock delta is within tolerance: 102.589691ms
	I1213 09:25:57.141427  400166 start.go:83] releasing machines lock for "functional-553391", held for 6.508533532s
	I1213 09:25:57.144511  400166 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:57.145023  400166 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
	I1213 09:25:57.145040  400166 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:57.145663  400166 ssh_runner.go:195] Run: cat /version.json
	I1213 09:25:57.145777  400166 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 09:25:57.149156  400166 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:57.149589  400166 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
	I1213 09:25:57.149608  400166 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:57.149667  400166 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:57.149808  400166 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/functional-553391/id_rsa Username:docker}
	I1213 09:25:57.150178  400166 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
	I1213 09:25:57.150194  400166 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:25:57.150412  400166 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/functional-553391/id_rsa Username:docker}
	I1213 09:25:57.258626  400166 ssh_runner.go:195] Run: systemctl --version
	I1213 09:25:57.302306  400166 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 09:25:57.500201  400166 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 09:25:57.512598  400166 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 09:25:57.512679  400166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 09:25:57.533867  400166 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 09:25:57.533886  400166 start.go:496] detecting cgroup driver to use...
	I1213 09:25:57.533967  400166 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 09:25:57.578344  400166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 09:25:57.619042  400166 docker.go:218] disabling cri-docker service (if available) ...
	I1213 09:25:57.619141  400166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 09:25:57.670646  400166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 09:25:57.704879  400166 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 09:25:58.019927  400166 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 09:25:58.234680  400166 docker.go:234] disabling docker service ...
	I1213 09:25:58.234757  400166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 09:25:58.273213  400166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 09:25:58.290651  400166 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 09:25:58.486138  400166 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 09:25:58.671126  400166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 09:25:58.688572  400166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:25:58.712791  400166 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 09:25:58.712847  400166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:25:58.725948  400166 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 09:25:58.726012  400166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:25:58.739009  400166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:25:58.751973  400166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:25:58.764818  400166 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 09:25:58.779438  400166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:25:58.793404  400166 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:25:58.807428  400166 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:25:58.820499  400166 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 09:25:58.831715  400166 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 09:25:58.843085  400166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:25:59.015761  400166 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 09:27:29.568760  400166 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.552962989s)
	I1213 09:27:29.568832  400166 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 09:27:29.568898  400166 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 09:27:29.575284  400166 start.go:564] Will wait 60s for crictl version
	I1213 09:27:29.575370  400166 ssh_runner.go:195] Run: which crictl
	I1213 09:27:29.580649  400166 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 09:27:29.620124  400166 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1213 09:27:29.620224  400166 ssh_runner.go:195] Run: crio --version
	I1213 09:27:29.649647  400166 ssh_runner.go:195] Run: crio --version
	I1213 09:27:29.682263  400166 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.29.1 ...
	I1213 09:27:29.686848  400166 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:27:29.687397  400166 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
	I1213 09:27:29.687430  400166 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
	I1213 09:27:29.687620  400166 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1213 09:27:29.694396  400166 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 09:27:29.695988  400166 kubeadm.go:884] updating cluster {Name:functional-553391 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-553391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 09:27:29.696163  400166 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 09:27:29.696228  400166 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:27:29.736630  400166 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:27:29.736645  400166 crio.go:433] Images already preloaded, skipping extraction
	I1213 09:27:29.736720  400166 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:27:29.769476  400166 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:27:29.769493  400166 cache_images.go:86] Images are preloaded, skipping loading
	I1213 09:27:29.769501  400166 kubeadm.go:935] updating node { 192.168.39.38 8441 v1.35.0-beta.0 crio true true} ...
	I1213 09:27:29.769637  400166 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-553391 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-553391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 09:27:29.769723  400166 ssh_runner.go:195] Run: crio config
	I1213 09:27:29.817350  400166 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 09:27:29.817373  400166 cni.go:84] Creating CNI manager for ""
	I1213 09:27:29.817383  400166 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 09:27:29.817391  400166 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 09:27:29.817412  400166 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.38 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-553391 NodeName:functional-553391 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 09:27:29.817530  400166 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.38
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-553391"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.38"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.38"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 09:27:29.817592  400166 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 09:27:29.830697  400166 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 09:27:29.830792  400166 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 09:27:29.842880  400166 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1213 09:27:29.864888  400166 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 09:27:29.890269  400166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2074 bytes)
	I1213 09:27:29.914841  400166 ssh_runner.go:195] Run: grep 192.168.39.38	control-plane.minikube.internal$ /etc/hosts
	I1213 09:27:29.919514  400166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:27:30.104888  400166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:27:30.122863  400166 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391 for IP: 192.168.39.38
	I1213 09:27:30.122881  400166 certs.go:195] generating shared ca certs ...
	I1213 09:27:30.122900  400166 certs.go:227] acquiring lock for ca certs: {Name:mkd63ae6418df38b62936a9f8faa40fdd87e4397 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:27:30.123141  400166 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-387918/.minikube/ca.key
	I1213 09:27:30.123210  400166 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-387918/.minikube/proxy-client-ca.key
	I1213 09:27:30.123218  400166 certs.go:257] generating profile certs ...
	I1213 09:27:30.123306  400166 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.key
	I1213 09:27:30.123366  400166 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/apiserver.key.8db172b3
	I1213 09:27:30.123403  400166 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/proxy-client.key
	I1213 09:27:30.123516  400166 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/391877.pem (1338 bytes)
	W1213 09:27:30.123548  400166 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-387918/.minikube/certs/391877_empty.pem, impossibly tiny 0 bytes
	I1213 09:27:30.123555  400166 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 09:27:30.123576  400166 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem (1078 bytes)
	I1213 09:27:30.123595  400166 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/cert.pem (1123 bytes)
	I1213 09:27:30.123614  400166 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/key.pem (1675 bytes)
	I1213 09:27:30.123658  400166 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/files/etc/ssl/certs/3918772.pem (1708 bytes)
	I1213 09:27:30.124476  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 09:27:30.155078  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 09:27:30.185398  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 09:27:30.216509  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 09:27:30.246683  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 09:27:30.277241  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 09:27:30.309147  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 09:27:30.339496  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 09:27:30.369722  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 09:27:30.401275  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/certs/391877.pem --> /usr/share/ca-certificates/391877.pem (1338 bytes)
	I1213 09:27:30.433999  400166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/files/etc/ssl/certs/3918772.pem --> /usr/share/ca-certificates/3918772.pem (1708 bytes)
	I1213 09:27:30.465419  400166 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 09:27:30.487273  400166 ssh_runner.go:195] Run: openssl version
	I1213 09:27:30.494392  400166 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3918772.pem
	I1213 09:27:30.507578  400166 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3918772.pem /etc/ssl/certs/3918772.pem
	I1213 09:27:30.520253  400166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3918772.pem
	I1213 09:27:30.526202  400166 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 09:23 /usr/share/ca-certificates/3918772.pem
	I1213 09:27:30.526262  400166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3918772.pem
	I1213 09:27:30.534171  400166 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 09:27:30.546711  400166 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:27:30.559987  400166 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 09:27:30.572651  400166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:27:30.578416  400166 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:27:30.578482  400166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:27:30.586465  400166 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 09:27:30.598415  400166 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/391877.pem
	I1213 09:27:30.610812  400166 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/391877.pem /etc/ssl/certs/391877.pem
	I1213 09:27:30.623291  400166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391877.pem
	I1213 09:27:30.628841  400166 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 09:23 /usr/share/ca-certificates/391877.pem
	I1213 09:27:30.628898  400166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391877.pem
	I1213 09:27:30.636824  400166 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 09:27:30.648894  400166 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 09:27:30.654220  400166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 09:27:30.661748  400166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 09:27:30.668972  400166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 09:27:30.676373  400166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 09:27:30.683574  400166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 09:27:30.691120  400166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 09:27:30.699035  400166 kubeadm.go:401] StartCluster: {Name:functional-553391 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-553391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:27:30.699121  400166 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 09:27:30.699188  400166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 09:27:30.738856  400166 cri.go:89] found id: "74dbec4b8aef1a21cd0d8a33ea1dcdf6703376e2fac9decc7229c4653835a2e9"
	I1213 09:27:30.738871  400166 cri.go:89] found id: "b9244bb17b8485f84b8f76fd8b0e1a25b8f2e3ef5cf5a3f95f1be4666b075421"
	I1213 09:27:30.738873  400166 cri.go:89] found id: "abef72de0b38ca6ec98975d20a6d31b464fd8ce72c3f85bebc27de9ee873efce"
	I1213 09:27:30.738876  400166 cri.go:89] found id: "15ccf55277802f6d84e33d122e3b923248435c477a7d8c5e6abca019214467ad"
	I1213 09:27:30.738878  400166 cri.go:89] found id: "43e2ebf7101ff7d080d10386c7911c24013836e1849ba2237f3c3ccf2a0697bb"
	I1213 09:27:30.738880  400166 cri.go:89] found id: "bef6c74863ea74a437871b1d75d0adf6fdd2e13050bd9f5fda41a413df2ad695"
	I1213 09:27:30.738882  400166 cri.go:89] found id: "a7de46befbc341adcce80258301d16635ae42bacd96697be843a3ea37de3097c"
	I1213 09:27:30.738883  400166 cri.go:89] found id: ""
	I1213 09:27:30.738935  400166 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-553391 -n functional-553391
helpers_test.go:270: (dbg) Run:  kubectl --context functional-553391 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (1.78s)
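The ComponentHealth post-mortem above boils down to two checks that can be re-run by hand against the same profile. This is a minimal sketch that reuses the exact context, selector, and profile name from the helpers_test.go lines above (functional-553391 comes from this run and will differ for other runs):

    # apiserver status for the profile (mirrors helpers_test.go:263)
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p functional-553391 -n functional-553391

    # list any pods, in any namespace, that are not Running (mirrors helpers_test.go:270)
    kubectl --context functional-553391 get po -A -o=jsonpath='{.items[*].metadata.name}' --field-selector=status.phase!=Running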

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (302.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-553391 --alsologtostderr -v=1]
E1213 09:37:37.814098  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-553391 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-553391 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-553391 --alsologtostderr -v=1] stderr:
I1213 09:35:54.276733  403311 out.go:360] Setting OutFile to fd 1 ...
I1213 09:35:54.276838  403311 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:35:54.276847  403311 out.go:374] Setting ErrFile to fd 2...
I1213 09:35:54.276851  403311 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:35:54.277051  403311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
I1213 09:35:54.277307  403311 mustload.go:66] Loading cluster: functional-553391
I1213 09:35:54.277683  403311 config.go:182] Loaded profile config "functional-553391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 09:35:54.279694  403311 host.go:66] Checking if "functional-553391" exists ...
I1213 09:35:54.279919  403311 api_server.go:166] Checking apiserver status ...
I1213 09:35:54.279969  403311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1213 09:35:54.282156  403311 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
I1213 09:35:54.282694  403311 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
I1213 09:35:54.282725  403311 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
I1213 09:35:54.282864  403311 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/functional-553391/id_rsa Username:docker}
I1213 09:35:54.371739  403311 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6376/cgroup
W1213 09:35:54.387270  403311 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/6376/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1213 09:35:54.387397  403311 ssh_runner.go:195] Run: ls
I1213 09:35:54.392514  403311 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8441/healthz ...
I1213 09:35:54.398960  403311 api_server.go:279] https://192.168.39.38:8441/healthz returned 200:
ok
W1213 09:35:54.399015  403311 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1213 09:35:54.399181  403311 config.go:182] Loaded profile config "functional-553391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 09:35:54.399197  403311 addons.go:70] Setting dashboard=true in profile "functional-553391"
I1213 09:35:54.399206  403311 addons.go:239] Setting addon dashboard=true in "functional-553391"
I1213 09:35:54.399236  403311 host.go:66] Checking if "functional-553391" exists ...
I1213 09:35:54.402878  403311 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1213 09:35:54.404094  403311 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1213 09:35:54.405279  403311 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1213 09:35:54.405293  403311 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1213 09:35:54.407896  403311 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
I1213 09:35:54.408304  403311 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
I1213 09:35:54.408350  403311 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
I1213 09:35:54.408528  403311 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/functional-553391/id_rsa Username:docker}
I1213 09:35:54.501811  403311 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1213 09:35:54.501839  403311 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1213 09:35:54.522730  403311 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1213 09:35:54.522761  403311 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1213 09:35:54.544578  403311 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1213 09:35:54.544608  403311 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1213 09:35:54.567236  403311 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1213 09:35:54.567261  403311 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1213 09:35:54.588451  403311 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1213 09:35:54.588482  403311 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1213 09:35:54.608670  403311 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1213 09:35:54.608699  403311 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1213 09:35:54.629552  403311 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1213 09:35:54.629587  403311 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1213 09:35:54.651381  403311 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1213 09:35:54.651408  403311 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1213 09:35:54.672855  403311 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1213 09:35:54.672891  403311 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1213 09:35:54.695922  403311 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1213 09:35:55.365767  403311 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-553391 addons enable metrics-server

                                                
                                                
I1213 09:35:55.367240  403311 addons.go:202] Writing out "functional-553391" config to set dashboard=true...
W1213 09:35:55.367663  403311 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1213 09:35:55.368599  403311 kapi.go:59] client config for functional-553391: &rest.Config{Host:"https://192.168.39.38:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.key", CAFile:"/home/jenkins/minikube-integration/22127-387918/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1213 09:35:55.369182  403311 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1213 09:35:55.369250  403311 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1213 09:35:55.369261  403311 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1213 09:35:55.369265  403311 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1213 09:35:55.369268  403311 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1213 09:35:55.379986  403311 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  21396f19-9298-4054-8fc4-84f8ddf206d0 1165 0 2025-12-13 09:35:55 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-13 09:35:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.109.202.53,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.109.202.53],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1213 09:35:55.380191  403311 out.go:285] * Launching proxy ...
* Launching proxy ...
I1213 09:35:55.380287  403311 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-553391 proxy --port 36195]
I1213 09:35:55.380764  403311 dashboard.go:159] Waiting for kubectl to output host:port ...
I1213 09:35:55.427536  403311 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1213 09:35:55.427605  403311 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1213 09:35:55.438675  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[689807c0-237c-4378-a7ac-e05fe2ec9876] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:35:55 GMT]] Body:0xc001652300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208a00 TLS:<nil>}
I1213 09:35:55.438758  403311 retry.go:31] will retry after 84.977µs: Temporary Error: unexpected response code: 503
I1213 09:35:55.444065  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[aca12453-d4e9-4b85-84c1-c4b479aecae8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:35:55 GMT]] Body:0xc0014f9b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00032cb40 TLS:<nil>}
I1213 09:35:55.444149  403311 retry.go:31] will retry after 221.902µs: Temporary Error: unexpected response code: 503
I1213 09:35:55.448231  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[381e4176-cd1b-472a-84ec-1e5afb2bbc64] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:35:55 GMT]] Body:0xc00080ee40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000047540 TLS:<nil>}
I1213 09:35:55.448342  403311 retry.go:31] will retry after 297.481µs: Temporary Error: unexpected response code: 503
I1213 09:35:55.451955  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[49425d51-c00e-468f-9eb5-4a17919d7609] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:35:55 GMT]] Body:0xc0014f9c80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208dc0 TLS:<nil>}
I1213 09:35:55.452018  403311 retry.go:31] will retry after 416.334µs: Temporary Error: unexpected response code: 503
I1213 09:35:55.455727  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c37ad829-7efe-4ecd-aa85-3c9544535e54] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:35:55 GMT]] Body:0xc00080ef40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0000477c0 TLS:<nil>}
I1213 09:35:55.455789  403311 retry.go:31] will retry after 577.953µs: Temporary Error: unexpected response code: 503
I1213 09:35:55.459875  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1a7ea87e-8896-42f7-a6b0-ab7fb0fb2374] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:35:55 GMT]] Body:0xc0014f9d80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208f00 TLS:<nil>}
I1213 09:35:55.459950  403311 retry.go:31] will retry after 745.652µs: Temporary Error: unexpected response code: 503
I1213 09:35:55.463660  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e30c0743-b5ff-420e-9e5b-2df8b3b62841] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:35:55 GMT]] Body:0xc001652400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000047900 TLS:<nil>}
I1213 09:35:55.463728  403311 retry.go:31] will retry after 1.132746ms: Temporary Error: unexpected response code: 503
I1213 09:35:55.468557  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[aedb9aa4-7ad3-41b4-b426-c319aa6f17e7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:35:55 GMT]] Body:0xc0014f9e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00032cf00 TLS:<nil>}
I1213 09:35:55.468611  403311 retry.go:31] will retry after 1.391829ms: Temporary Error: unexpected response code: 503
I1213 09:35:55.473444  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6e02cb49-e2ee-4019-929a-473a56b24f15] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:35:55 GMT]] Body:0xc001652500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000047a40 TLS:<nil>}
I1213 09:35:55.473506  403311 retry.go:31] will retry after 3.647353ms: Temporary Error: unexpected response code: 503
I1213 09:35:55.480602  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cd22b52b-5fd8-4e8f-bc92-9da987d2b597] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:35:55 GMT]] Body:0xc0014f9f40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00032d040 TLS:<nil>}
I1213 09:35:55.480663  403311 retry.go:31] will retry after 4.0618ms: Temporary Error: unexpected response code: 503
I1213 09:35:55.487144  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[eeb30a2a-17f3-43b6-8a7d-3ad1fd66664e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:35:55 GMT]] Body:0xc00080f080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000047b80 TLS:<nil>}
I1213 09:35:55.487200  403311 retry.go:31] will retry after 7.920184ms: Temporary Error: unexpected response code: 503
I1213 09:35:55.499059  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bb931da7-6673-4bc8-8a9c-fceb8ecb3d33] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:35:55 GMT]] Body:0xc0016dc0c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209180 TLS:<nil>}
I1213 09:35:55.499120  403311 retry.go:31] will retry after 4.896608ms: Temporary Error: unexpected response code: 503
I1213 09:35:55.506863  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ff2b392d-e1d6-483b-99f6-e4cc55c5c8c7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:35:55 GMT]] Body:0xc00080f180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000047cc0 TLS:<nil>}
I1213 09:35:55.506922  403311 retry.go:31] will retry after 9.78736ms: Temporary Error: unexpected response code: 503
I1213 09:35:55.520356  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[240f8dfd-ad84-4334-b758-dd8e4e335def] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:35:55 GMT]] Body:0xc0016dc180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209400 TLS:<nil>}
I1213 09:35:55.520430  403311 retry.go:31] will retry after 27.688809ms: Temporary Error: unexpected response code: 503
I1213 09:35:55.551658  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ea601b10-1de6-4491-b01e-44994f7d66d2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:35:55 GMT]] Body:0xc0016dc280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000047e00 TLS:<nil>}
I1213 09:35:55.551733  403311 retry.go:31] will retry after 23.509618ms: Temporary Error: unexpected response code: 503
I1213 09:35:55.579159  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[99543ef8-e8b1-4bc6-884a-faf3f946edd8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:35:55 GMT]] Body:0xc00080f280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003fc000 TLS:<nil>}
I1213 09:35:55.579228  403311 retry.go:31] will retry after 26.258232ms: Temporary Error: unexpected response code: 503
I1213 09:35:55.610162  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8093401e-ba18-4910-af0b-ef5417229c6d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:35:55 GMT]] Body:0xc0016dc380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209540 TLS:<nil>}
I1213 09:35:55.610233  403311 retry.go:31] will retry after 37.668586ms: Temporary Error: unexpected response code: 503
I1213 09:35:55.652310  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f95b703c-8e9d-41a1-ba67-7c4359b89a13] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:35:55 GMT]] Body:0xc001652640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003fc140 TLS:<nil>}
I1213 09:35:55.652477  403311 retry.go:31] will retry after 73.135651ms: Temporary Error: unexpected response code: 503
I1213 09:35:55.728845  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a1b6ef09-39da-4209-ae02-6733557e83aa] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:35:55 GMT]] Body:0xc00080f380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00032d180 TLS:<nil>}
I1213 09:35:55.728910  403311 retry.go:31] will retry after 199.782086ms: Temporary Error: unexpected response code: 503
I1213 09:35:55.932827  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fd9df841-9ed0-4536-ac28-87b9ca4fcefa] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:35:55 GMT]] Body:0xc00080f440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209680 TLS:<nil>}
I1213 09:35:55.932912  403311 retry.go:31] will retry after 183.238736ms: Temporary Error: unexpected response code: 503
I1213 09:35:56.119377  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[46c53af1-4b1f-4b8e-9ff4-7e0279d89f54] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:35:56 GMT]] Body:0xc001652700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002097c0 TLS:<nil>}
I1213 09:35:56.119467  403311 retry.go:31] will retry after 447.403781ms: Temporary Error: unexpected response code: 503
I1213 09:35:56.571167  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[01c7d56e-a648-4316-89f7-89ffa51a613a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:35:56 GMT]] Body:0xc0016dc500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00032d2c0 TLS:<nil>}
I1213 09:35:56.571252  403311 retry.go:31] will retry after 265.139213ms: Temporary Error: unexpected response code: 503
I1213 09:35:56.839977  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[040d4e0b-3206-4808-bd49-6f0e5bb6283e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:35:56 GMT]] Body:0xc0016dc5c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003fc280 TLS:<nil>}
I1213 09:35:56.840079  403311 retry.go:31] will retry after 1.018389748s: Temporary Error: unexpected response code: 503
I1213 09:35:57.862239  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[418908cc-fc8b-4223-91a6-827f06490d5b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:35:57 GMT]] Body:0xc0016dc680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003fc3c0 TLS:<nil>}
I1213 09:35:57.862316  403311 retry.go:31] will retry after 1.235776714s: Temporary Error: unexpected response code: 503
I1213 09:35:59.101811  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d63311d5-35c0-4e81-8c61-784919b8e6ab] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:35:59 GMT]] Body:0xc001652800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003fc500 TLS:<nil>}
I1213 09:35:59.101884  403311 retry.go:31] will retry after 1.911253605s: Temporary Error: unexpected response code: 503
I1213 09:36:01.017211  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[468bfe2f-8b1e-4dd1-8a10-528e6ee77b33] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:36:01 GMT]] Body:0xc00080f5c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003fc640 TLS:<nil>}
I1213 09:36:01.017297  403311 retry.go:31] will retry after 2.808961672s: Temporary Error: unexpected response code: 503
I1213 09:36:03.831649  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bfd5e7d5-9ded-40aa-b9b5-fddc7006a290] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:36:03 GMT]] Body:0xc0016dc840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209900 TLS:<nil>}
I1213 09:36:03.831720  403311 retry.go:31] will retry after 4.402133806s: Temporary Error: unexpected response code: 503
I1213 09:36:08.239987  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b57c576c-0e43-4e6d-b51e-c9b526f2513c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:36:08 GMT]] Body:0xc0016dc8c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209a40 TLS:<nil>}
I1213 09:36:08.240083  403311 retry.go:31] will retry after 7.612241322s: Temporary Error: unexpected response code: 503
I1213 09:36:15.856562  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9c286a70-d07e-4bb3-8d4c-6be859165492] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:36:15 GMT]] Body:0xc0016528c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209b80 TLS:<nil>}
I1213 09:36:15.856641  403311 retry.go:31] will retry after 4.769455004s: Temporary Error: unexpected response code: 503
I1213 09:36:20.629988  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a8db6c3c-afe0-4e46-b087-6effc1af4c88] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:36:20 GMT]] Body:0xc00080f740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00032d400 TLS:<nil>}
I1213 09:36:20.630083  403311 retry.go:31] will retry after 17.531849379s: Temporary Error: unexpected response code: 503
I1213 09:36:38.167417  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[58ff0bc9-8919-400a-96fb-819e5d7e89eb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:36:38 GMT]] Body:0xc00080f7c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003fc780 TLS:<nil>}
I1213 09:36:38.167512  403311 retry.go:31] will retry after 27.368690557s: Temporary Error: unexpected response code: 503
I1213 09:37:05.540515  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[320cbde1-b69e-47db-af59-7e3217683c87] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:37:05 GMT]] Body:0xc0016dca00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003fc8c0 TLS:<nil>}
I1213 09:37:05.540594  403311 retry.go:31] will retry after 23.67445258s: Temporary Error: unexpected response code: 503
I1213 09:37:29.221528  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3077a228-fc2c-4b70-8692-e5784268ed88] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:37:29 GMT]] Body:0xc0016529c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209cc0 TLS:<nil>}
I1213 09:37:29.221615  403311 retry.go:31] will retry after 29.52971922s: Temporary Error: unexpected response code: 503
I1213 09:37:58.759086  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e0a432fd-f239-4983-afc1-563ad21775dd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:37:58 GMT]] Body:0xc00080e040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00032c140 TLS:<nil>}
I1213 09:37:58.759166  403311 retry.go:31] will retry after 48.727939772s: Temporary Error: unexpected response code: 503
I1213 09:38:47.492144  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a1c93984-5b38-4bd9-a979-24ed42b8f04e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:38:47 GMT]] Body:0xc0009200c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208280 TLS:<nil>}
I1213 09:38:47.492232  403311 retry.go:31] will retry after 44.554496404s: Temporary Error: unexpected response code: 503
I1213 09:39:32.052552  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cd579e1e-389a-44f1-97fa-b759935cdb4e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:39:32 GMT]] Body:0xc00080e0c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002083c0 TLS:<nil>}
I1213 09:39:32.052637  403311 retry.go:31] will retry after 1m15.586116032s: Temporary Error: unexpected response code: 503
I1213 09:40:47.642106  403311 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[adc50219-c8c6-4835-a645-a92c107a910d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 13 Dec 2025 09:40:47 GMT]] Body:0xc000920080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209e00 TLS:<nil>}
I1213 09:40:47.642195  403311 retry.go:31] will retry after 44.01541258s: Temporary Error: unexpected response code: 503
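Every poll of the kubectl proxy in the stderr above returns 503, so the dashboard URL is never produced and the test gives up after roughly five minutes of backoff. The same probe can be reproduced by hand; this is a sketch that reuses the proxy port and service path taken verbatim from the log above (adjust the port if 36195 is already in use):

    # start a kubectl proxy against the same context (mirrors dashboard.go:154)
    kubectl --context functional-553391 proxy --port 36195 &

    # probe the dashboard service through the proxy; 503 means the dashboard pod is not serving yet
    curl -s -o /dev/null -w '%{http_code}\n' \
      http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

    # if the probe keeps returning 503, inspect the dashboard pods directly
    kubectl --context functional-553391 -n kubernetes-dashboard get pods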
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-553391 -n functional-553391
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-553391 logs -n 25: (1.314391368s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image     │ functional-553391 image ls                                                                                                                          │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:31 UTC │ 13 Dec 25 09:31 UTC │
	│ image     │ functional-553391 image save --daemon kicbase/echo-server:functional-553391 --alsologtostderr                                                       │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:31 UTC │ 13 Dec 25 09:31 UTC │
	│ ssh       │ functional-553391 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ ssh       │ functional-553391 ssh sudo umount -f /mount-9p                                                                                                      │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │ 13 Dec 25 09:35 UTC │
	│ ssh       │ functional-553391 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ mount     │ -p functional-553391 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2088139448/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ ssh       │ functional-553391 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │ 13 Dec 25 09:35 UTC │
	│ ssh       │ functional-553391 ssh -- ls -la /mount-9p                                                                                                           │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │ 13 Dec 25 09:35 UTC │
	│ ssh       │ functional-553391 ssh sudo umount -f /mount-9p                                                                                                      │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ ssh       │ functional-553391 ssh findmnt -T /mount1                                                                                                            │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ mount     │ -p functional-553391 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3184993173/001:/mount3 --alsologtostderr -v=1                │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ mount     │ -p functional-553391 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3184993173/001:/mount2 --alsologtostderr -v=1                │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ mount     │ -p functional-553391 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3184993173/001:/mount1 --alsologtostderr -v=1                │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ ssh       │ functional-553391 ssh findmnt -T /mount1                                                                                                            │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │ 13 Dec 25 09:35 UTC │
	│ ssh       │ functional-553391 ssh findmnt -T /mount2                                                                                                            │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │ 13 Dec 25 09:35 UTC │
	│ ssh       │ functional-553391 ssh findmnt -T /mount3                                                                                                            │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │ 13 Dec 25 09:35 UTC │
	│ mount     │ -p functional-553391 --kill=true                                                                                                                    │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ ssh       │ functional-553391 ssh echo hello                                                                                                                    │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │ 13 Dec 25 09:35 UTC │
	│ ssh       │ functional-553391 ssh cat /etc/hostname                                                                                                             │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │ 13 Dec 25 09:35 UTC │
	│ start     │ -p functional-553391 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ start     │ -p functional-553391 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                   │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ start     │ -p functional-553391 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-553391 --alsologtostderr -v=1                                                                                      │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ addons    │ functional-553391 addons list                                                                                                                       │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:38 UTC │ 13 Dec 25 09:38 UTC │
	│ addons    │ functional-553391 addons list -o json                                                                                                               │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:38 UTC │ 13 Dec 25 09:38 UTC │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
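	The audit rows above record the mount verification sequence the functional tests drive against the profile: three 9p mounts are started in the background, each is checked from inside the guest with findmnt, and the mounts are then torn down with --kill=true. Below is a minimal Go sketch of that flow; it is not the test's actual helper code, and the binary path, profile name and host directory are assumptions taken from the rows above.
	
	// mountcheck.go - hedged sketch of the mount/findmnt sequence in the audit table.
	package main
	
	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"time"
	)
	
	func main() {
		bin := "out/minikube-linux-amd64" // assumed binary path, as in the audit rows
		profile := "functional-553391"    // profile under test
		hostDir := "/tmp/mount-demo"      // hypothetical host directory
	
		if err := os.MkdirAll(hostDir, 0o755); err != nil {
			log.Fatalf("creating host dir: %v", err)
		}
	
		// Start the 9p mount in the background, as the test does.
		mount := exec.Command(bin, "mount", "-p", profile, hostDir+":/mount1", "--alsologtostderr", "-v=1")
		if err := mount.Start(); err != nil {
			log.Fatalf("starting mount: %v", err)
		}
		defer mount.Process.Kill() // rough stand-in for `minikube mount ... --kill=true`
	
		time.Sleep(5 * time.Second) // give the mount a moment to appear
	
		// Verify the mount from inside the guest, mirroring `ssh findmnt -T /mount1`.
		out, err := exec.Command(bin, "-p", profile, "ssh", "findmnt", "-T", "/mount1").CombinedOutput()
		fmt.Printf("findmnt output:\n%s\n", out)
		if err != nil {
			log.Fatalf("findmnt failed: %v", err)
		}
	}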
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:35:54
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:35:54.159923  403295 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:35:54.160027  403295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:35:54.160032  403295 out.go:374] Setting ErrFile to fd 2...
	I1213 09:35:54.160036  403295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:35:54.160364  403295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
	I1213 09:35:54.160842  403295 out.go:368] Setting JSON to false
	I1213 09:35:54.161750  403295 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4703,"bootTime":1765613851,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:35:54.161818  403295 start.go:143] virtualization: kvm guest
	I1213 09:35:54.163745  403295 out.go:179] * [functional-553391] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:35:54.165254  403295 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 09:35:54.165269  403295 notify.go:221] Checking for updates...
	I1213 09:35:54.167675  403295 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:35:54.168945  403295 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
	I1213 09:35:54.170341  403295 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
	I1213 09:35:54.171825  403295 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:35:54.173115  403295 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:35:54.174891  403295 config.go:182] Loaded profile config "functional-553391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:35:54.175647  403295 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:35:54.206662  403295 out.go:179] * Using the kvm2 driver based on existing profile
	I1213 09:35:54.207891  403295 start.go:309] selected driver: kvm2
	I1213 09:35:54.207911  403295 start.go:927] validating driver "kvm2" against &{Name:functional-553391 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-553391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:35:54.208021  403295 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:35:54.210175  403295 out.go:203] 
	W1213 09:35:54.211537  403295 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 09:35:54.212968  403295 out.go:203] 
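	The dry-run start above is rejected with RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250MiB is below the 1800MB usable minimum that minikube enforces before creating or reconfiguring a machine. The following is a hedged sketch of that kind of pre-flight check; the constant and error wording mirror the log line but are illustrative only, not minikube's real implementation.
	
	// memcheck.go - illustrative memory pre-flight validation.
	package main
	
	import (
		"errors"
		"fmt"
	)
	
	const minUsableMemoryMB = 1800 // usable minimum reported in the log above
	
	var errInsufficientReqMemory = errors.New("RSRC_INSUFFICIENT_REQ_MEMORY")
	
	// validateRequestedMemory rejects allocations below the usable minimum.
	func validateRequestedMemory(requestedMB int) error {
		if requestedMB < minUsableMemoryMB {
			return fmt.Errorf("%w: requested memory allocation %dMiB is less than the usable minimum of %dMB",
				errInsufficientReqMemory, requestedMB, minUsableMemoryMB)
		}
		return nil
	}
	
	func main() {
		// 250MB is what the --dry-run invocation above requested.
		if err := validateRequestedMemory(250); err != nil {
			fmt.Println("X Exiting due to", err)
		}
	}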
	
	
	==> CRI-O <==
	Dec 13 09:40:54 functional-553391 crio[5772]: time="2025-12-13 09:40:54.971688164Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765618854971645475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:164172,},InodesUsed:&UInt64Value{Value:73,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b1db0976-a234-44dc-9a24-fd158419e69a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:40:54 functional-553391 crio[5772]: time="2025-12-13 09:40:54.972651901Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca91b050-49c5-4045-bee3-6686b27f09e9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:40:54 functional-553391 crio[5772]: time="2025-12-13 09:40:54.972719619Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca91b050-49c5-4045-bee3-6686b27f09e9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:40:54 functional-553391 crio[5772]: time="2025-12-13 09:40:54.973010270Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a033e03e6998717fd54c7a982e213390e2a93a581ab7db6633f97e03610bcce,PodSandboxId:1c66dfad4cada9aca6045bea029b2985bbb8d379f23e2de4ff19f786a4d9e0a2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765618055477026408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0f8e40cbfe4fe5278f2c2ebdcd0a07ddc2799d2d471fefccfdb255eba910739,PodSandboxId:981ef7045b5a7ba2a152b10d09e8904c2083766c230c9d447a92f2977a4b7b40,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765618055790289701,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6d3aa793a5d4bdfe5b2b3732af0374d4cddf7fb3eedc48b5fe0e21ee321de3,PodSandboxId:45411b104740ddd5117e91200d1a690e438ae7a32255f1cb171b6bbbdfbcd892,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765618055498638556,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2237eb3cfb9427342a03e244cb23da2440800776c88d39def517cae2df0c9081,PodSandboxId:7e0c763b37cf6447b07c9f367a9d448f4b876618c1146aeac6cd0602ada58d91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765618052710769135,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136e051ce4b87ff42f1c7596ef9ad758,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a44d955020176708ebbad74e424fc16d6e74e1600a4aa935cc2ad1fb033d01,PodSandboxId:79e99323f18bbf1152b9765898d58f53eb3c609f3fc414b0680b3cd72d5fb60d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONT
AINER_RUNNING,CreatedAt:1765618052613820224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47ee0aabb1efdd09eec154c30881929f05d58a9520e541025def583ab954d02,PodSandboxId:748de94d0a3965d93b15772c21ed0a143d1dc80ebe6b1cc7b52869edf3fc6961,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765618052599039262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74dbec4b8aef1a21cd0d8a33ea1dcdf6703376e2fac9decc7229c4653835a2e9,PodSandboxId:5e9144b389dce1dd3d416188988aa991a8d750e08b7fb3adbf78f24eca9c973a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765617926050470093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9244bb17b8485f84b8f76fd8b0e1a25b8f2e3ef5cf5a3f95f1be4666b075421,PodSandboxId:e6342f727896e432c490e632c9755f1679b02fab10e6d297e644839c8d4d7cb9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765617926038294787,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e2ebf7101ff7d080d10386c7911c24013836e1849ba2237f3c3ccf2a0697bb,PodSandboxId:235bdf467969af32577d0402d1c26dbaa02b168f83902d82024b9484fa704e63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765617900109965284,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ccf55277802f6d84e33d122e3b923248435c477a7d8c5e6abca019214467ad,PodSandboxId:ad35e4e9691075c9ec6e8d7f1842bc656bd2d3fc305cb40f78320be06d05e563,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765617900149382376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 623733f12fc7a2bd3df192b3433220d0,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\
",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef6c74863ea74a437871b1d75d0adf6fdd2e13050bd9f5fda41a413df2ad695,PodSandboxId:f1efa84ba477462501f24ab4ce10544e1def90c12e260715f5392f8129ccbf5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765617900061465369,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198
bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7de46befbc341adcce80258301d16635ae42bacd96697be843a3ea37de3097c,PodSandboxId:3c085dc8222fb8456c9f36193cce150826e12aca23bda60fede0bb6b4fcead2b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765617899883988957,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ca91b050-49c5-4045-bee3-6686b27f09e9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:40:55 functional-553391 crio[5772]: time="2025-12-13 09:40:55.013560207Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e794c19b-904b-4ebb-bee1-10037a8397ab name=/runtime.v1.RuntimeService/Version
	Dec 13 09:40:55 functional-553391 crio[5772]: time="2025-12-13 09:40:55.013650355Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e794c19b-904b-4ebb-bee1-10037a8397ab name=/runtime.v1.RuntimeService/Version
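	The CRI-O entries around this point are paired request/response debug logs for the CRI calls that the kubelet (and tools such as crictl) issue while polling the runtime: Version, ImageFsInfo and an unfiltered ListContainers. Below is a minimal Go sketch that issues the same calls against a CRI-O socket using the CRI gRPC API; the socket path is an assumption for a typical CRI-O install, and this is not the code that produced these log lines.
	
	// criquery.go - hedged sketch of the Version/ImageFsInfo/ListContainers calls seen in this log.
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Assumed CRI-O socket path; adjust for the host being inspected.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI-O: %v", err)
		}
		defer conn.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
	
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		img := runtimeapi.NewImageServiceClient(conn)
	
		// /runtime.v1.RuntimeService/Version
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			log.Fatalf("Version: %v", err)
		}
		fmt.Printf("runtime: %s %s\n", ver.RuntimeName, ver.RuntimeVersion)
	
		// /runtime.v1.ImageService/ImageFsInfo
		fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
		if err != nil {
			log.Fatalf("ImageFsInfo: %v", err)
		}
		for _, f := range fs.ImageFilesystems {
			fmt.Printf("image fs %s used=%d bytes\n", f.FsId.Mountpoint, f.UsedBytes.Value)
		}
	
		// /runtime.v1.RuntimeService/ListContainers with no filter, as in the log.
		cs, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range cs.Containers {
			fmt.Printf("%s\t%s\t%s\n", c.Id, c.Metadata.Name, c.State)
		}
	}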
	Dec 13 09:40:55 functional-553391 crio[5772]: time="2025-12-13 09:40:55.015125393Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fe495361-ba46-47c4-889b-d81ff71e8add name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:40:55 functional-553391 crio[5772]: time="2025-12-13 09:40:55.016217995Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765618855016189885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:164172,},InodesUsed:&UInt64Value{Value:73,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe495361-ba46-47c4-889b-d81ff71e8add name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:40:55 functional-553391 crio[5772]: time="2025-12-13 09:40:55.017530024Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=28dea047-275c-48e6-a16d-8dab709811de name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:40:55 functional-553391 crio[5772]: time="2025-12-13 09:40:55.017634548Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=28dea047-275c-48e6-a16d-8dab709811de name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:40:55 functional-553391 crio[5772]: time="2025-12-13 09:40:55.017939261Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a033e03e6998717fd54c7a982e213390e2a93a581ab7db6633f97e03610bcce,PodSandboxId:1c66dfad4cada9aca6045bea029b2985bbb8d379f23e2de4ff19f786a4d9e0a2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765618055477026408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0f8e40cbfe4fe5278f2c2ebdcd0a07ddc2799d2d471fefccfdb255eba910739,PodSandboxId:981ef7045b5a7ba2a152b10d09e8904c2083766c230c9d447a92f2977a4b7b40,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765618055790289701,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6d3aa793a5d4bdfe5b2b3732af0374d4cddf7fb3eedc48b5fe0e21ee321de3,PodSandboxId:45411b104740ddd5117e91200d1a690e438ae7a32255f1cb171b6bbbdfbcd892,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765618055498638556,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2237eb3cfb9427342a03e244cb23da2440800776c88d39def517cae2df0c9081,PodSandboxId:7e0c763b37cf6447b07c9f367a9d448f4b876618c1146aeac6cd0602ada58d91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765618052710769135,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136e051ce4b87ff42f1c7596ef9ad758,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a44d955020176708ebbad74e424fc16d6e74e1600a4aa935cc2ad1fb033d01,PodSandboxId:79e99323f18bbf1152b9765898d58f53eb3c609f3fc414b0680b3cd72d5fb60d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONT
AINER_RUNNING,CreatedAt:1765618052613820224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47ee0aabb1efdd09eec154c30881929f05d58a9520e541025def583ab954d02,PodSandboxId:748de94d0a3965d93b15772c21ed0a143d1dc80ebe6b1cc7b52869edf3fc6961,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765618052599039262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74dbec4b8aef1a21cd0d8a33ea1dcdf6703376e2fac9decc7229c4653835a2e9,PodSandboxId:5e9144b389dce1dd3d416188988aa991a8d750e08b7fb3adbf78f24eca9c973a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765617926050470093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9244bb17b8485f84b8f76fd8b0e1a25b8f2e3ef5cf5a3f95f1be4666b075421,PodSandboxId:e6342f727896e432c490e632c9755f1679b02fab10e6d297e644839c8d4d7cb9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765617926038294787,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e2ebf7101ff7d080d10386c7911c24013836e1849ba2237f3c3ccf2a0697bb,PodSandboxId:235bdf467969af32577d0402d1c26dbaa02b168f83902d82024b9484fa704e63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765617900109965284,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ccf55277802f6d84e33d122e3b923248435c477a7d8c5e6abca019214467ad,PodSandboxId:ad35e4e9691075c9ec6e8d7f1842bc656bd2d3fc305cb40f78320be06d05e563,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765617900149382376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 623733f12fc7a2bd3df192b3433220d0,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\
",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef6c74863ea74a437871b1d75d0adf6fdd2e13050bd9f5fda41a413df2ad695,PodSandboxId:f1efa84ba477462501f24ab4ce10544e1def90c12e260715f5392f8129ccbf5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765617900061465369,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198
bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7de46befbc341adcce80258301d16635ae42bacd96697be843a3ea37de3097c,PodSandboxId:3c085dc8222fb8456c9f36193cce150826e12aca23bda60fede0bb6b4fcead2b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765617899883988957,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=28dea047-275c-48e6-a16d-8dab709811de name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:40:55 functional-553391 crio[5772]: time="2025-12-13 09:40:55.050946956Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7a3028b3-7692-4d3b-85e5-8c372ef04fdd name=/runtime.v1.RuntimeService/Version
	Dec 13 09:40:55 functional-553391 crio[5772]: time="2025-12-13 09:40:55.051125779Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7a3028b3-7692-4d3b-85e5-8c372ef04fdd name=/runtime.v1.RuntimeService/Version
	Dec 13 09:40:55 functional-553391 crio[5772]: time="2025-12-13 09:40:55.052381178Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=56a182c3-66fd-475b-9ae1-e023cee52862 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:40:55 functional-553391 crio[5772]: time="2025-12-13 09:40:55.052952585Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765618855052928242,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:164172,},InodesUsed:&UInt64Value{Value:73,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=56a182c3-66fd-475b-9ae1-e023cee52862 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:40:55 functional-553391 crio[5772]: time="2025-12-13 09:40:55.054092711Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=774125c7-e924-4fe8-b1b8-4d3c82c1c8a2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:40:55 functional-553391 crio[5772]: time="2025-12-13 09:40:55.054293485Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=774125c7-e924-4fe8-b1b8-4d3c82c1c8a2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:40:55 functional-553391 crio[5772]: time="2025-12-13 09:40:55.054758209Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a033e03e6998717fd54c7a982e213390e2a93a581ab7db6633f97e03610bcce,PodSandboxId:1c66dfad4cada9aca6045bea029b2985bbb8d379f23e2de4ff19f786a4d9e0a2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765618055477026408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0f8e40cbfe4fe5278f2c2ebdcd0a07ddc2799d2d471fefccfdb255eba910739,PodSandboxId:981ef7045b5a7ba2a152b10d09e8904c2083766c230c9d447a92f2977a4b7b40,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765618055790289701,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6d3aa793a5d4bdfe5b2b3732af0374d4cddf7fb3eedc48b5fe0e21ee321de3,PodSandboxId:45411b104740ddd5117e91200d1a690e438ae7a32255f1cb171b6bbbdfbcd892,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765618055498638556,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2237eb3cfb9427342a03e244cb23da2440800776c88d39def517cae2df0c9081,PodSandboxId:7e0c763b37cf6447b07c9f367a9d448f4b876618c1146aeac6cd0602ada58d91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765618052710769135,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136e051ce4b87ff42f1c7596ef9ad758,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a44d955020176708ebbad74e424fc16d6e74e1600a4aa935cc2ad1fb033d01,PodSandboxId:79e99323f18bbf1152b9765898d58f53eb3c609f3fc414b0680b3cd72d5fb60d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONT
AINER_RUNNING,CreatedAt:1765618052613820224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47ee0aabb1efdd09eec154c30881929f05d58a9520e541025def583ab954d02,PodSandboxId:748de94d0a3965d93b15772c21ed0a143d1dc80ebe6b1cc7b52869edf3fc6961,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765618052599039262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74dbec4b8aef1a21cd0d8a33ea1dcdf6703376e2fac9decc7229c4653835a2e9,PodSandboxId:5e9144b389dce1dd3d416188988aa991a8d750e08b7fb3adbf78f24eca9c973a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765617926050470093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9244bb17b8485f84b8f76fd8b0e1a25b8f2e3ef5cf5a3f95f1be4666b075421,PodSandboxId:e6342f727896e432c490e632c9755f1679b02fab10e6d297e644839c8d4d7cb9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765617926038294787,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e2ebf7101ff7d080d10386c7911c24013836e1849ba2237f3c3ccf2a0697bb,PodSandboxId:235bdf467969af32577d0402d1c26dbaa02b168f83902d82024b9484fa704e63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765617900109965284,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ccf55277802f6d84e33d122e3b923248435c477a7d8c5e6abca019214467ad,PodSandboxId:ad35e4e9691075c9ec6e8d7f1842bc656bd2d3fc305cb40f78320be06d05e563,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765617900149382376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 623733f12fc7a2bd3df192b3433220d0,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\
",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef6c74863ea74a437871b1d75d0adf6fdd2e13050bd9f5fda41a413df2ad695,PodSandboxId:f1efa84ba477462501f24ab4ce10544e1def90c12e260715f5392f8129ccbf5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765617900061465369,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198
bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7de46befbc341adcce80258301d16635ae42bacd96697be843a3ea37de3097c,PodSandboxId:3c085dc8222fb8456c9f36193cce150826e12aca23bda60fede0bb6b4fcead2b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765617899883988957,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=774125c7-e924-4fe8-b1b8-4d3c82c1c8a2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:40:55 functional-553391 crio[5772]: time="2025-12-13 09:40:55.086462137Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=95f7f7a9-4f8a-48c0-920d-d53e2d3a1cb7 name=/runtime.v1.RuntimeService/Version
	Dec 13 09:40:55 functional-553391 crio[5772]: time="2025-12-13 09:40:55.086590140Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=95f7f7a9-4f8a-48c0-920d-d53e2d3a1cb7 name=/runtime.v1.RuntimeService/Version
	Dec 13 09:40:55 functional-553391 crio[5772]: time="2025-12-13 09:40:55.087685805Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fb4173da-4486-41a3-9e5e-c68e9f00c774 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:40:55 functional-553391 crio[5772]: time="2025-12-13 09:40:55.088280328Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765618855088255598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:164172,},InodesUsed:&UInt64Value{Value:73,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fb4173da-4486-41a3-9e5e-c68e9f00c774 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:40:55 functional-553391 crio[5772]: time="2025-12-13 09:40:55.089524498Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc62562f-c51c-4c83-ab0d-73469b7deec5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:40:55 functional-553391 crio[5772]: time="2025-12-13 09:40:55.089759944Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc62562f-c51c-4c83-ab0d-73469b7deec5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:40:55 functional-553391 crio[5772]: time="2025-12-13 09:40:55.090299649Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a033e03e6998717fd54c7a982e213390e2a93a581ab7db6633f97e03610bcce,PodSandboxId:1c66dfad4cada9aca6045bea029b2985bbb8d379f23e2de4ff19f786a4d9e0a2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765618055477026408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0f8e40cbfe4fe5278f2c2ebdcd0a07ddc2799d2d471fefccfdb255eba910739,PodSandboxId:981ef7045b5a7ba2a152b10d09e8904c2083766c230c9d447a92f2977a4b7b40,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765618055790289701,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6d3aa793a5d4bdfe5b2b3732af0374d4cddf7fb3eedc48b5fe0e21ee321de3,PodSandboxId:45411b104740ddd5117e91200d1a690e438ae7a32255f1cb171b6bbbdfbcd892,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765618055498638556,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2237eb3cfb9427342a03e244cb23da2440800776c88d39def517cae2df0c9081,PodSandboxId:7e0c763b37cf6447b07c9f367a9d448f4b876618c1146aeac6cd0602ada58d91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765618052710769135,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136e051ce4b87ff42f1c7596ef9ad758,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a44d955020176708ebbad74e424fc16d6e74e1600a4aa935cc2ad1fb033d01,PodSandboxId:79e99323f18bbf1152b9765898d58f53eb3c609f3fc414b0680b3cd72d5fb60d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONT
AINER_RUNNING,CreatedAt:1765618052613820224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47ee0aabb1efdd09eec154c30881929f05d58a9520e541025def583ab954d02,PodSandboxId:748de94d0a3965d93b15772c21ed0a143d1dc80ebe6b1cc7b52869edf3fc6961,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765618052599039262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74dbec4b8aef1a21cd0d8a33ea1dcdf6703376e2fac9decc7229c4653835a2e9,PodSandboxId:5e9144b389dce1dd3d416188988aa991a8d750e08b7fb3adbf78f24eca9c973a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765617926050470093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9244bb17b8485f84b8f76fd8b0e1a25b8f2e3ef5cf5a3f95f1be4666b075421,PodSandboxId:e6342f727896e432c490e632c9755f1679b02fab10e6d297e644839c8d4d7cb9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765617926038294787,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e2ebf7101ff7d080d10386c7911c24013836e1849ba2237f3c3ccf2a0697bb,PodSandboxId:235bdf467969af32577d0402d1c26dbaa02b168f83902d82024b9484fa704e63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765617900109965284,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ccf55277802f6d84e33d122e3b923248435c477a7d8c5e6abca019214467ad,PodSandboxId:ad35e4e9691075c9ec6e8d7f1842bc656bd2d3fc305cb40f78320be06d05e563,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765617900149382376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 623733f12fc7a2bd3df192b3433220d0,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\
",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef6c74863ea74a437871b1d75d0adf6fdd2e13050bd9f5fda41a413df2ad695,PodSandboxId:f1efa84ba477462501f24ab4ce10544e1def90c12e260715f5392f8129ccbf5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765617900061465369,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198
bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7de46befbc341adcce80258301d16635ae42bacd96697be843a3ea37de3097c,PodSandboxId:3c085dc8222fb8456c9f36193cce150826e12aca23bda60fede0bb6b4fcead2b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765617899883988957,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cc62562f-c51c-4c83-ab0d-73469b7deec5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f0f8e40cbfe4f       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139   13 minutes ago      Running             coredns                   2                   981ef7045b5a7       coredns-7d764666f9-rjg8z                    kube-system
	4b6d3aa793a5d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       3                   45411b104740d       storage-provisioner                         kube-system
	4a033e03e6998       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   13 minutes ago      Running             kube-proxy                2                   1c66dfad4cada       kube-proxy-nmxbh                            kube-system
	2237eb3cfb942       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   13 minutes ago      Running             kube-apiserver            0                   7e0c763b37cf6       kube-apiserver-functional-553391            kube-system
	17a44d9550201       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   13 minutes ago      Running             etcd                      2                   79e99323f18bb       etcd-functional-553391                      kube-system
	c47ee0aabb1ef       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   13 minutes ago      Running             kube-controller-manager   2                   748de94d0a396       kube-controller-manager-functional-553391   kube-system
	74dbec4b8aef1       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139   15 minutes ago      Exited              coredns                   1                   5e9144b389dce       coredns-7d764666f9-rjg8z                    kube-system
	b9244bb17b848       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Exited              storage-provisioner       2                   e6342f727896e       storage-provisioner                         kube-system
	15ccf55277802       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   15 minutes ago      Exited              kube-scheduler            1                   ad35e4e969107       kube-scheduler-functional-553391            kube-system
	43e2ebf7101ff       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   15 minutes ago      Exited              etcd                      1                   235bdf467969a       etcd-functional-553391                      kube-system
	bef6c74863ea7       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   15 minutes ago      Exited              kube-controller-manager   1                   f1efa84ba4774       kube-controller-manager-functional-553391   kube-system
	a7de46befbc34       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   15 minutes ago      Exited              kube-proxy                1                   3c085dc8222fb       kube-proxy-nmxbh                            kube-system
	
	
	==> coredns [74dbec4b8aef1a21cd0d8a33ea1dcdf6703376e2fac9decc7229c4653835a2e9] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:38849 - 10587 "HINFO IN 1179697731504859025.6635090832342881038. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.0581936s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f0f8e40cbfe4fe5278f2c2ebdcd0a07ddc2799d2d471fefccfdb255eba910739] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:34740 - 64681 "HINFO IN 9182635211618943717.7247219486871022041. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.096462358s
	
	
	==> describe nodes <==
	Name:               functional-553391
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-553391
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b
	                    minikube.k8s.io/name=functional-553391
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T09_24_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 09:23:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-553391
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 09:40:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 09:38:58 +0000   Sat, 13 Dec 2025 09:23:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 09:38:58 +0000   Sat, 13 Dec 2025 09:23:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 09:38:58 +0000   Sat, 13 Dec 2025 09:23:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 09:38:58 +0000   Sat, 13 Dec 2025 09:24:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.38
	  Hostname:    functional-553391
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 fdf3a8f76472433fb21c3307ef40831b
	  System UUID:                fdf3a8f7-6472-433f-b21c-3307ef40831b
	  Boot ID:                    3c8d40c0-0e2d-4a05-9897-d24bc6cacbb9
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-rjg8z                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     16m
	  kube-system                 etcd-functional-553391                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         16m
	  kube-system                 kube-apiserver-functional-553391             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-functional-553391    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-nmxbh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-functional-553391             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  16m   node-controller  Node functional-553391 event: Registered Node functional-553391 in Controller
	  Normal  RegisteredNode  15m   node-controller  Node functional-553391 event: Registered Node functional-553391 in Controller
	  Normal  RegisteredNode  13m   node-controller  Node functional-553391 event: Registered Node functional-553391 in Controller
	
	
	==> dmesg <==
	[Dec13 09:23] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000049] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004691] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.166245] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.084827] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.098140] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.130337] kauditd_printk_skb: 171 callbacks suppressed
	[Dec13 09:24] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.600381] kauditd_printk_skb: 248 callbacks suppressed
	[ +35.932646] kauditd_printk_skb: 39 callbacks suppressed
	[Dec13 09:25] kauditd_printk_skb: 356 callbacks suppressed
	[  +1.617310] kauditd_printk_skb: 42 callbacks suppressed
	[ +12.973574] kauditd_printk_skb: 12 callbacks suppressed
	[Dec13 09:27] kauditd_printk_skb: 209 callbacks suppressed
	[  +3.605359] kauditd_printk_skb: 153 callbacks suppressed
	[  +6.110507] kauditd_printk_skb: 133 callbacks suppressed
	[Dec13 09:31] kauditd_printk_skb: 36 callbacks suppressed
	[Dec13 09:35] kauditd_printk_skb: 2 callbacks suppressed
	[Dec13 09:38] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [17a44d955020176708ebbad74e424fc16d6e74e1600a4aa935cc2ad1fb033d01] <==
	{"level":"warn","ts":"2025-12-13T09:27:33.755682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.771997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.780019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.785864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.793594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.801980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.809713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.823612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.829416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.840008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.850256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.858039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.864984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.872083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.891523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.905524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.916312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.922300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.931561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.941621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.945560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.993699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50688","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T09:37:33.367003Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":959}
	{"level":"info","ts":"2025-12-13T09:37:33.377296Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":959,"took":"9.242015ms","hash":2537724432,"current-db-size-bytes":2850816,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2850816,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2025-12-13T09:37:33.377568Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2537724432,"revision":959,"compact-revision":-1}
	
	
	==> etcd [43e2ebf7101ff7d080d10386c7911c24013836e1849ba2237f3c3ccf2a0697bb] <==
	{"level":"warn","ts":"2025-12-13T09:25:23.910219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:25:23.923853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:25:23.927780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:25:23.936775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:25:23.944573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:25:23.953555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:25:24.035457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36150","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T09:25:51.349706Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-13T09:25:51.349772Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-553391","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.38:2380"],"advertise-client-urls":["https://192.168.39.38:2379"]}
	{"level":"error","ts":"2025-12-13T09:25:51.357195Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T09:25:51.442258Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T09:25:51.443776Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T09:25:51.443862Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"38b26e584d45e0da","current-leader-member-id":"38b26e584d45e0da"}
	{"level":"warn","ts":"2025-12-13T09:25:51.443944Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T09:25:51.444031Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T09:25:51.444041Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T09:25:51.444058Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-13T09:25:51.444015Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-13T09:25:51.444108Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.38:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T09:25:51.444134Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.38:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T09:25:51.444142Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.38:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T09:25:51.447967Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.38:2380"}
	{"level":"error","ts":"2025-12-13T09:25:51.448071Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.38:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T09:25:51.448106Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.38:2380"}
	{"level":"info","ts":"2025-12-13T09:25:51.448114Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-553391","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.38:2380"],"advertise-client-urls":["https://192.168.39.38:2379"]}
	
	
	==> kernel <==
	 09:40:55 up 17 min,  0 users,  load average: 0.12, 0.17, 0.17
	Linux functional-553391 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 11 23:11:39 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [2237eb3cfb9427342a03e244cb23da2440800776c88d39def517cae2df0c9081] <==
	I1213 09:27:34.752018       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1213 09:27:34.752063       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1213 09:27:34.752152       1 aggregator.go:187] initial CRD sync complete...
	I1213 09:27:34.752161       1 autoregister_controller.go:144] Starting autoregister controller
	I1213 09:27:34.752166       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 09:27:34.752170       1 cache.go:39] Caches are synced for autoregister controller
	I1213 09:27:34.755443       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 09:27:34.768964       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 09:27:34.989792       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 09:27:35.552170       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1213 09:27:36.683227       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 09:27:36.740625       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 09:27:36.773702       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 09:27:36.784227       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 09:27:38.099192       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 09:27:38.250187       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 09:27:38.348439       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 09:31:44.252988       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.31.242"}
	I1213 09:31:47.762685       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.66.94"}
	I1213 09:31:51.195438       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.98.33.51"}
	I1213 09:35:55.088227       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 09:35:55.327550       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.202.53"}
	I1213 09:35:55.349609       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.109.158"}
	I1213 09:37:34.662190       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 09:38:04.053710       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.57.7"}
	
	
	==> kube-controller-manager [bef6c74863ea74a437871b1d75d0adf6fdd2e13050bd9f5fda41a413df2ad695] <==
	I1213 09:25:27.972668       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.974789       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.972832       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.974966       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.975360       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.972929       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.977747       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.977858       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.977973       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.978099       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.978132       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.978207       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.978922       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.979032       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.979085       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.984323       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.984406       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.984450       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.984501       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.988096       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:28.008019       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:28.066414       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:28.084492       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:28.084512       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1213 09:25:28.084517       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-controller-manager [c47ee0aabb1efdd09eec154c30881929f05d58a9520e541025def583ab954d02] <==
	I1213 09:27:37.929063       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.959660       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.959721       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.959741       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.959800       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.960015       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.960080       1 range_allocator.go:177] "Sending events to api server"
	I1213 09:27:37.960131       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1213 09:27:37.960137       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:27:37.960141       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.959682       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.960314       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1213 09:27:37.960382       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-553391"
	I1213 09:27:37.960424       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1213 09:27:37.968967       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.971598       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.972793       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1213 09:27:37.972804       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1213 09:27:37.986319       1 shared_informer.go:377] "Caches are synced"
	E1213 09:35:55.174531       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:55.184739       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:55.198136       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:55.216383       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:55.221609       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:55.233963       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [4a033e03e6998717fd54c7a982e213390e2a93a581ab7db6633f97e03610bcce] <==
	I1213 09:27:36.194743       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:27:36.296159       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:36.299041       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.38"]
	E1213 09:27:36.299525       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:27:36.369439       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 09:27:36.369524       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 09:27:36.369545       1 server_linux.go:136] "Using iptables Proxier"
	I1213 09:27:36.381798       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:27:36.382110       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1213 09:27:36.382141       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:27:36.388769       1 config.go:200] "Starting service config controller"
	I1213 09:27:36.389028       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:27:36.389070       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:27:36.389076       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:27:36.389235       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:27:36.389535       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:27:36.390638       1 config.go:309] "Starting node config controller"
	I1213 09:27:36.390719       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:27:36.489796       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 09:27:36.489917       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 09:27:36.489987       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 09:27:36.491158       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [a7de46befbc341adcce80258301d16635ae42bacd96697be843a3ea37de3097c] <==
	I1213 09:25:00.988777       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:25:25.893454       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:25.893556       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.38"]
	E1213 09:25:25.893655       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:25:25.940800       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 09:25:25.940970       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 09:25:25.940997       1 server_linux.go:136] "Using iptables Proxier"
	I1213 09:25:25.950368       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:25:25.950781       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1213 09:25:25.950797       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:25:25.953179       1 config.go:200] "Starting service config controller"
	I1213 09:25:25.954715       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:25:25.953595       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:25:25.955069       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:25:25.953610       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:25:25.955255       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:25:25.959118       1 config.go:309] "Starting node config controller"
	I1213 09:25:25.959146       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:25:26.055988       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 09:25:26.056061       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 09:25:26.056143       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 09:25:26.060009       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [15ccf55277802f6d84e33d122e3b923248435c477a7d8c5e6abca019214467ad] <==
	E1213 09:25:24.739405       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1213 09:25:24.739461       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1213 09:25:24.739526       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1213 09:25:24.739572       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1213 09:25:24.739642       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1213 09:25:24.739677       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1213 09:25:24.739770       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1213 09:25:24.739826       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1213 09:25:24.740002       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1213 09:25:24.740105       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1213 09:25:24.740186       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1213 09:25:24.740250       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1213 09:25:24.740316       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1213 09:25:24.740394       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1213 09:25:24.740516       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1213 09:25:24.740571       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1213 09:25:24.740616       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1213 09:25:24.740653       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1213 09:25:27.991267       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:51.355821       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1213 09:25:51.365725       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1213 09:25:51.365754       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:25:51.368478       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1213 09:25:51.368492       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1213 09:25:51.368520       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 13 09:40:23 functional-553391 kubelet[6136]: E1213 09:40:23.994202    6136 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-553391_kube-system(623733f12fc7a2bd3df192b3433220d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-553391_kube-system(623733f12fc7a2bd3df192b3433220d0)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-553391" podUID="623733f12fc7a2bd3df192b3433220d0"
	Dec 13 09:40:31 functional-553391 kubelet[6136]: E1213 09:40:31.982568    6136 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-rjg8z" containerName="coredns"
	Dec 13 09:40:32 functional-553391 kubelet[6136]: E1213 09:40:32.065429    6136 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod623733f12fc7a2bd3df192b3433220d0/crio-ad35e4e9691075c9ec6e8d7f1842bc656bd2d3fc305cb40f78320be06d05e563: Error finding container ad35e4e9691075c9ec6e8d7f1842bc656bd2d3fc305cb40f78320be06d05e563: Status 404 returned error can't find the container with id ad35e4e9691075c9ec6e8d7f1842bc656bd2d3fc305cb40f78320be06d05e563
	Dec 13 09:40:32 functional-553391 kubelet[6136]: E1213 09:40:32.065692    6136 manager.go:1119] Failed to create existing container: /kubepods/burstable/poda819a5b5d8a1acac4ff9198bf329d816/crio-f1efa84ba477462501f24ab4ce10544e1def90c12e260715f5392f8129ccbf5f: Error finding container f1efa84ba477462501f24ab4ce10544e1def90c12e260715f5392f8129ccbf5f: Status 404 returned error can't find the container with id f1efa84ba477462501f24ab4ce10544e1def90c12e260715f5392f8129ccbf5f
	Dec 13 09:40:32 functional-553391 kubelet[6136]: E1213 09:40:32.066079    6136 manager.go:1119] Failed to create existing container: /kubepods/besteffort/pod5b03d972-1560-487e-8c23-357ba0a288ce/crio-3c085dc8222fb8456c9f36193cce150826e12aca23bda60fede0bb6b4fcead2b: Error finding container 3c085dc8222fb8456c9f36193cce150826e12aca23bda60fede0bb6b4fcead2b: Status 404 returned error can't find the container with id 3c085dc8222fb8456c9f36193cce150826e12aca23bda60fede0bb6b4fcead2b
	Dec 13 09:40:32 functional-553391 kubelet[6136]: E1213 09:40:32.066318    6136 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod474b0e4e-417c-49da-b863-8950ea9eb75f/crio-5e9144b389dce1dd3d416188988aa991a8d750e08b7fb3adbf78f24eca9c973a: Error finding container 5e9144b389dce1dd3d416188988aa991a8d750e08b7fb3adbf78f24eca9c973a: Status 404 returned error can't find the container with id 5e9144b389dce1dd3d416188988aa991a8d750e08b7fb3adbf78f24eca9c973a
	Dec 13 09:40:32 functional-553391 kubelet[6136]: E1213 09:40:32.066646    6136 manager.go:1119] Failed to create existing container: /kubepods/besteffort/pod4b1284e2-956a-4e4a-b504-57f20fa9a365/crio-e6342f727896e432c490e632c9755f1679b02fab10e6d297e644839c8d4d7cb9: Error finding container e6342f727896e432c490e632c9755f1679b02fab10e6d297e644839c8d4d7cb9: Status 404 returned error can't find the container with id e6342f727896e432c490e632c9755f1679b02fab10e6d297e644839c8d4d7cb9
	Dec 13 09:40:32 functional-553391 kubelet[6136]: E1213 09:40:32.067052    6136 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod9423f6b5da5b329cef63430d36acee6e/crio-235bdf467969af32577d0402d1c26dbaa02b168f83902d82024b9484fa704e63: Error finding container 235bdf467969af32577d0402d1c26dbaa02b168f83902d82024b9484fa704e63: Status 404 returned error can't find the container with id 235bdf467969af32577d0402d1c26dbaa02b168f83902d82024b9484fa704e63
	Dec 13 09:40:32 functional-553391 kubelet[6136]: E1213 09:40:32.266635    6136 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765618832265560730  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164172}  inodes_used:{value:73}}"
	Dec 13 09:40:32 functional-553391 kubelet[6136]: E1213 09:40:32.266674    6136 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765618832265560730  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164172}  inodes_used:{value:73}}"
	Dec 13 09:40:36 functional-553391 kubelet[6136]: E1213 09:40:36.981377    6136 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-553391" containerName="kube-controller-manager"
	Dec 13 09:40:37 functional-553391 kubelet[6136]: E1213 09:40:37.981800    6136 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-553391" containerName="kube-scheduler"
	Dec 13 09:40:37 functional-553391 kubelet[6136]: E1213 09:40:37.992256    6136 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\" already exists"
	Dec 13 09:40:37 functional-553391 kubelet[6136]: E1213 09:40:37.992301    6136 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\" already exists" pod="kube-system/kube-scheduler-functional-553391"
	Dec 13 09:40:37 functional-553391 kubelet[6136]: E1213 09:40:37.992316    6136 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\" already exists" pod="kube-system/kube-scheduler-functional-553391"
	Dec 13 09:40:37 functional-553391 kubelet[6136]: E1213 09:40:37.992382    6136 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-553391_kube-system(623733f12fc7a2bd3df192b3433220d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-553391_kube-system(623733f12fc7a2bd3df192b3433220d0)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-553391" podUID="623733f12fc7a2bd3df192b3433220d0"
	Dec 13 09:40:42 functional-553391 kubelet[6136]: E1213 09:40:42.269706    6136 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765618842269151328  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164172}  inodes_used:{value:73}}"
	Dec 13 09:40:42 functional-553391 kubelet[6136]: E1213 09:40:42.269727    6136 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765618842269151328  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164172}  inodes_used:{value:73}}"
	Dec 13 09:40:49 functional-553391 kubelet[6136]: E1213 09:40:49.982807    6136 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-553391" containerName="kube-scheduler"
	Dec 13 09:40:49 functional-553391 kubelet[6136]: E1213 09:40:49.999337    6136 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\" already exists"
	Dec 13 09:40:49 functional-553391 kubelet[6136]: E1213 09:40:49.999400    6136 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\" already exists" pod="kube-system/kube-scheduler-functional-553391"
	Dec 13 09:40:49 functional-553391 kubelet[6136]: E1213 09:40:49.999416    6136 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\" already exists" pod="kube-system/kube-scheduler-functional-553391"
	Dec 13 09:40:49 functional-553391 kubelet[6136]: E1213 09:40:49.999500    6136 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-553391_kube-system(623733f12fc7a2bd3df192b3433220d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-553391_kube-system(623733f12fc7a2bd3df192b3433220d0)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-553391" podUID="623733f12fc7a2bd3df192b3433220d0"
	Dec 13 09:40:52 functional-553391 kubelet[6136]: E1213 09:40:52.271286    6136 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765618852270862879  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164172}  inodes_used:{value:73}}"
	Dec 13 09:40:52 functional-553391 kubelet[6136]: E1213 09:40:52.271322    6136 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765618852270862879  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164172}  inodes_used:{value:73}}"
	
	
	==> storage-provisioner [4b6d3aa793a5d4bdfe5b2b3732af0374d4cddf7fb3eedc48b5fe0e21ee321de3] <==
	W1213 09:40:31.499943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:40:33.503114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:40:33.512170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:40:35.515463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:40:35.520238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:40:37.523352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:40:37.532025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:40:39.535460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:40:39.542111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:40:41.545642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:40:41.551079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:40:43.554490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:40:43.559686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:40:45.563554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:40:45.570043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:40:47.574256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:40:47.584361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:40:49.587834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:40:49.593393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:40:51.596923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:40:51.606974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:40:53.609720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:40:53.614341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:40:55.619163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:40:55.630186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b9244bb17b8485f84b8f76fd8b0e1a25b8f2e3ef5cf5a3f95f1be4666b075421] <==
	I1213 09:25:26.201262       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 09:25:26.213115       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 09:25:26.215708       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 09:25:26.220587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:29.676396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:33.936290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:37.534481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:40.587597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:43.611039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:43.616691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 09:25:43.616926       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 09:25:43.617034       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2a807b17-b228-43c8-97ae-e7e16ec2cdf4", APIVersion:"v1", ResourceVersion:"532", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-553391_d215204c-1541-413a-b16f-a41e2460e6c8 became leader
	I1213 09:25:43.617264       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-553391_d215204c-1541-413a-b16f-a41e2460e6c8!
	W1213 09:25:43.620228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:43.629789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 09:25:43.717650       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-553391_d215204c-1541-413a-b16f-a41e2460e6c8!
	W1213 09:25:45.635212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:45.648126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:47.651491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:47.657022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:49.662658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:49.675580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
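The kubelet section of the log above fails repeatedly with "pod sandbox with name \"k8s_kube-scheduler-functional-553391_kube-system_..._2\" already exists", which points at a stale CRI-O sandbox left behind across the restart. A minimal, hypothetical recovery sketch (manual commands, not part of the test run; <sandbox-id> is a placeholder for the ID returned by the first command):

    # list the CRI-O pod sandboxes for the scheduler pod on the node
    minikube -p functional-553391 ssh -- sudo crictl pods --name kube-scheduler-functional-553391
    # force-remove the stale sandbox so the kubelet can recreate it
    minikube -p functional-553391 ssh -- sudo crictl rmp -f <sandbox-id>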
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-553391 -n functional-553391
helpers_test.go:270: (dbg) Run:  kubectl --context functional-553391 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-5758569b79-mc2sr hello-node-connect-9f67c86d4-5k96g mysql-7d7b65bc95-bmf88 sp-pod dashboard-metrics-scraper-5565989548-kmw97 kubernetes-dashboard-b84665fb8-fphhq
helpers_test.go:283: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-553391 describe pod busybox-mount hello-node-5758569b79-mc2sr hello-node-connect-9f67c86d4-5k96g mysql-7d7b65bc95-bmf88 sp-pod dashboard-metrics-scraper-5565989548-kmw97 kubernetes-dashboard-b84665fb8-fphhq
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-553391 describe pod busybox-mount hello-node-5758569b79-mc2sr hello-node-connect-9f67c86d4-5k96g mysql-7d7b65bc95-bmf88 sp-pod dashboard-metrics-scraper-5565989548-kmw97 kubernetes-dashboard-b84665fb8-fphhq: exit status 1 (95.987659ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  mount-munger:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    Environment:  <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-42dlg (ro)
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-42dlg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             hello-node-5758569b79-mc2sr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Image:        kicbase/echo-server
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8f569 (ro)
	Volumes:
	  kube-api-access-8f569:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             hello-node-connect-9f67c86d4-5k96g
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Image:        kicbase/echo-server
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lv7bb (ro)
	Volumes:
	  kube-api-access-lv7bb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             mysql-7d7b65bc95-bmf88
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=mysql
	                  pod-template-hash=7d7b65bc95
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-7d7b65bc95
	Containers:
	  mysql:
	    Image:      public.ecr.aws/docker/library/mysql:8.4
	    Port:       3306/TCP (mysql)
	    Host Port:  0/TCP (mysql)
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q89gb (ro)
	Volumes:
	  kube-api-access-q89gb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Image:        public.ecr.aws/nginx/nginx:alpine
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p9sfg (ro)
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-p9sfg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-kmw97" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-fphhq" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context functional-553391 describe pod busybox-mount hello-node-5758569b79-mc2sr hello-node-connect-9f67c86d4-5k96g mysql-7d7b65bc95-bmf88 sp-pod dashboard-metrics-scraper-5565989548-kmw97 kubernetes-dashboard-b84665fb8-fphhq: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (302.08s)
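The two NotFound errors in the stderr block are consistent with the describe command querying the default namespace, while the dashboard pods (if they were created at all) normally run in their own kubernetes-dashboard namespace. A hedged follow-up check, assuming the usual minikube dashboard namespace:

    # look for the dashboard pods where the addon actually deploys them
    kubectl --context functional-553391 get pods -n kubernetes-dashboard -o wide
    kubectl --context functional-553391 describe pod -n kubernetes-dashboard dashboard-metrics-scraper-5565989548-kmw97 kubernetes-dashboard-b84665fb8-fphhq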

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (602.59s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-553391 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-553391 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-5k96g" [2936a5d1-85b8-468d-8d1d-78150de5b2fe] Pending
E1213 09:38:56.551133  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-553391 -n functional-553391
functional_test.go:1645: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-13 09:48:04.304976369 +0000 UTC m=+2195.032936969
functional_test.go:1645: (dbg) Run:  kubectl --context functional-553391 describe po hello-node-connect-9f67c86d4-5k96g -n default
functional_test.go:1645: (dbg) kubectl --context functional-553391 describe po hello-node-connect-9f67c86d4-5k96g -n default:
Name:             hello-node-connect-9f67c86d4-5k96g
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=hello-node-connect
pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/hello-node-connect-9f67c86d4
Containers:
echo-server:
Image:        kicbase/echo-server
Port:         <none>
Host Port:    <none>
Environment:  <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lv7bb (ro)
Volumes:
kube-api-access-lv7bb:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
functional_test.go:1645: (dbg) Run:  kubectl --context functional-553391 logs hello-node-connect-9f67c86d4-5k96g -n default
functional_test.go:1645: (dbg) kubectl --context functional-553391 logs hello-node-connect-9f67c86d4-5k96g -n default:
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-553391 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-9f67c86d4-5k96g
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=hello-node-connect
pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/hello-node-connect-9f67c86d4
Containers:
echo-server:
Image:        kicbase/echo-server
Port:         <none>
Host Port:    <none>
Environment:  <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lv7bb (ro)
Volumes:
kube-api-access-lv7bb:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-553391 logs -l app=hello-node-connect
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-553391 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.110.57.7
IPs:                      10.110.57.7
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30925/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
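The service describe above shows an empty Endpoints field, which matches the pod staying Pending: a NodePort service only gets endpoints once a ready pod matches its selector. A short sketch of how one could confirm where the chain breaks (commands assumed for debugging, not executed by this test):

    # pods selected by the service, and whether any are scheduled/Ready
    kubectl --context functional-553391 get pods -l app=hello-node-connect -o wide
    # EndpointSlices carry a standard label pointing back at the owning service
    kubectl --context functional-553391 get endpointslices -l kubernetes.io/service-name=hello-node-connect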
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-553391 -n functional-553391
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-553391 logs -n 25: (1.231871367s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                    ARGS                                                                     │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-553391 ssh findmnt -T /mount3                                                                                                    │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │ 13 Dec 25 09:35 UTC │
	│ mount          │ -p functional-553391 --kill=true                                                                                                            │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ ssh            │ functional-553391 ssh echo hello                                                                                                            │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │ 13 Dec 25 09:35 UTC │
	│ ssh            │ functional-553391 ssh cat /etc/hostname                                                                                                     │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │ 13 Dec 25 09:35 UTC │
	│ start          │ -p functional-553391 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ start          │ -p functional-553391 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0           │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ start          │ -p functional-553391 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-553391 --alsologtostderr -v=1                                                                              │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ addons         │ functional-553391 addons list                                                                                                               │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:38 UTC │ 13 Dec 25 09:38 UTC │
	│ addons         │ functional-553391 addons list -o json                                                                                                       │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:38 UTC │ 13 Dec 25 09:38 UTC │
	│ update-context │ functional-553391 update-context --alsologtostderr -v=2                                                                                     │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:40 UTC │ 13 Dec 25 09:40 UTC │
	│ update-context │ functional-553391 update-context --alsologtostderr -v=2                                                                                     │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:40 UTC │ 13 Dec 25 09:40 UTC │
	│ update-context │ functional-553391 update-context --alsologtostderr -v=2                                                                                     │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:40 UTC │ 13 Dec 25 09:40 UTC │
	│ image          │ functional-553391 image ls --format short --alsologtostderr                                                                                 │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:40 UTC │ 13 Dec 25 09:40 UTC │
	│ image          │ functional-553391 image ls --format yaml --alsologtostderr                                                                                  │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:40 UTC │ 13 Dec 25 09:40 UTC │
	│ ssh            │ functional-553391 ssh pgrep buildkitd                                                                                                       │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:40 UTC │                     │
	│ image          │ functional-553391 image build -t localhost/my-image:functional-553391 testdata/build --alsologtostderr                                      │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:40 UTC │ 13 Dec 25 09:41 UTC │
	│ image          │ functional-553391 image ls                                                                                                                  │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:41 UTC │ 13 Dec 25 09:41 UTC │
	│ image          │ functional-553391 image ls --format json --alsologtostderr                                                                                  │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:41 UTC │ 13 Dec 25 09:41 UTC │
	│ image          │ functional-553391 image ls --format table --alsologtostderr                                                                                 │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:41 UTC │ 13 Dec 25 09:41 UTC │
	│ service        │ functional-553391 service list                                                                                                              │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:41 UTC │ 13 Dec 25 09:41 UTC │
	│ service        │ functional-553391 service list -o json                                                                                                      │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:41 UTC │ 13 Dec 25 09:41 UTC │
	│ service        │ functional-553391 service --namespace=default --https --url hello-node                                                                      │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:41 UTC │                     │
	│ service        │ functional-553391 service hello-node --url --format={{.IP}}                                                                                 │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:41 UTC │                     │
	│ service        │ functional-553391 service hello-node --url                                                                                                  │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:41 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:35:54
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:35:54.159923  403295 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:35:54.160027  403295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:35:54.160032  403295 out.go:374] Setting ErrFile to fd 2...
	I1213 09:35:54.160036  403295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:35:54.160364  403295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
	I1213 09:35:54.160842  403295 out.go:368] Setting JSON to false
	I1213 09:35:54.161750  403295 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4703,"bootTime":1765613851,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:35:54.161818  403295 start.go:143] virtualization: kvm guest
	I1213 09:35:54.163745  403295 out.go:179] * [functional-553391] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:35:54.165254  403295 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 09:35:54.165269  403295 notify.go:221] Checking for updates...
	I1213 09:35:54.167675  403295 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:35:54.168945  403295 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
	I1213 09:35:54.170341  403295 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
	I1213 09:35:54.171825  403295 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:35:54.173115  403295 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:35:54.174891  403295 config.go:182] Loaded profile config "functional-553391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:35:54.175647  403295 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:35:54.206662  403295 out.go:179] * Using the kvm2 driver based on the existing profile
	I1213 09:35:54.207891  403295 start.go:309] selected driver: kvm2
	I1213 09:35:54.207911  403295 start.go:927] validating driver "kvm2" against &{Name:functional-553391 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-553391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:35:54.208021  403295 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:35:54.210175  403295 out.go:203] 
	W1213 09:35:54.211537  403295 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is less than the usable minimum of 1800MB
	I1213 09:35:54.212968  403295 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 09:48:05 functional-553391 crio[5772]: time="2025-12-13 09:48:05.249757514Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765619285249731139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:189831,},InodesUsed:&UInt64Value{Value:89,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=85e42871-9eda-426d-b0ca-1da1de0ef9e4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:48:05 functional-553391 crio[5772]: time="2025-12-13 09:48:05.251039887Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a11c192-c8e4-4030-848e-71713533d9b7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:48:05 functional-553391 crio[5772]: time="2025-12-13 09:48:05.251125839Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a11c192-c8e4-4030-848e-71713533d9b7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:48:05 functional-553391 crio[5772]: time="2025-12-13 09:48:05.251345803Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a033e03e6998717fd54c7a982e213390e2a93a581ab7db6633f97e03610bcce,PodSandboxId:1c66dfad4cada9aca6045bea029b2985bbb8d379f23e2de4ff19f786a4d9e0a2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765618055477026408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0f8e40cbfe4fe5278f2c2ebdcd0a07ddc2799d2d471fefccfdb255eba910739,PodSandboxId:981ef7045b5a7ba2a152b10d09e8904c2083766c230c9d447a92f2977a4b7b40,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765618055790289701,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6d3aa793a5d4bdfe5b2b3732af0374d4cddf7fb3eedc48b5fe0e21ee321de3,PodSandboxId:45411b104740ddd5117e91200d1a690e438ae7a32255f1cb171b6bbbdfbcd892,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765618055498638556,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2237eb3cfb9427342a03e244cb23da2440800776c88d39def517cae2df0c9081,PodSandboxId:7e0c763b37cf6447b07c9f367a9d448f4b876618c1146aeac6cd0602ada58d91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765618052710769135,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136e051ce4b87ff42f1c7596ef9ad758,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a44d955020176708ebbad74e424fc16d6e74e1600a4aa935cc2ad1fb033d01,PodSandboxId:79e99323f18bbf1152b9765898d58f53eb3c609f3fc414b0680b3cd72d5fb60d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONT
AINER_RUNNING,CreatedAt:1765618052613820224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47ee0aabb1efdd09eec154c30881929f05d58a9520e541025def583ab954d02,PodSandboxId:748de94d0a3965d93b15772c21ed0a143d1dc80ebe6b1cc7b52869edf3fc6961,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765618052599039262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74dbec4b8aef1a21cd0d8a33ea1dcdf6703376e2fac9decc7229c4653835a2e9,PodSandboxId:5e9144b389dce1dd3d416188988aa991a8d750e08b7fb3adbf78f24eca9c973a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765617926050470093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9244bb17b8485f84b8f76fd8b0e1a25b8f2e3ef5cf5a3f95f1be4666b075421,PodSandboxId:e6342f727896e432c490e632c9755f1679b02fab10e6d297e644839c8d4d7cb9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765617926038294787,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e2ebf7101ff7d080d10386c7911c24013836e1849ba2237f3c3ccf2a0697bb,PodSandboxId:235bdf467969af32577d0402d1c26dbaa02b168f83902d82024b9484fa704e63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765617900109965284,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ccf55277802f6d84e33d122e3b923248435c477a7d8c5e6abca019214467ad,PodSandboxId:ad35e4e9691075c9ec6e8d7f1842bc656bd2d3fc305cb40f78320be06d05e563,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765617900149382376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 623733f12fc7a2bd3df192b3433220d0,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\
",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef6c74863ea74a437871b1d75d0adf6fdd2e13050bd9f5fda41a413df2ad695,PodSandboxId:f1efa84ba477462501f24ab4ce10544e1def90c12e260715f5392f8129ccbf5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765617900061465369,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198
bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7de46befbc341adcce80258301d16635ae42bacd96697be843a3ea37de3097c,PodSandboxId:3c085dc8222fb8456c9f36193cce150826e12aca23bda60fede0bb6b4fcead2b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765617899883988957,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5a11c192-c8e4-4030-848e-71713533d9b7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:48:05 functional-553391 crio[5772]: time="2025-12-13 09:48:05.288157124Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3f87e436-47ec-4fd8-b84b-a285fb23728f name=/runtime.v1.RuntimeService/Version
	Dec 13 09:48:05 functional-553391 crio[5772]: time="2025-12-13 09:48:05.288234186Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3f87e436-47ec-4fd8-b84b-a285fb23728f name=/runtime.v1.RuntimeService/Version
	Dec 13 09:48:05 functional-553391 crio[5772]: time="2025-12-13 09:48:05.289746441Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d823804f-a95c-43be-bde8-6ebee41196c6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:48:05 functional-553391 crio[5772]: time="2025-12-13 09:48:05.290365998Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765619285290342149,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:189831,},InodesUsed:&UInt64Value{Value:89,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d823804f-a95c-43be-bde8-6ebee41196c6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:48:05 functional-553391 crio[5772]: time="2025-12-13 09:48:05.291536792Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5086e65f-64e3-4350-af8f-d270607d63c2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:48:05 functional-553391 crio[5772]: time="2025-12-13 09:48:05.291603995Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5086e65f-64e3-4350-af8f-d270607d63c2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:48:05 functional-553391 crio[5772]: time="2025-12-13 09:48:05.291844638Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a033e03e6998717fd54c7a982e213390e2a93a581ab7db6633f97e03610bcce,PodSandboxId:1c66dfad4cada9aca6045bea029b2985bbb8d379f23e2de4ff19f786a4d9e0a2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765618055477026408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0f8e40cbfe4fe5278f2c2ebdcd0a07ddc2799d2d471fefccfdb255eba910739,PodSandboxId:981ef7045b5a7ba2a152b10d09e8904c2083766c230c9d447a92f2977a4b7b40,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765618055790289701,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6d3aa793a5d4bdfe5b2b3732af0374d4cddf7fb3eedc48b5fe0e21ee321de3,PodSandboxId:45411b104740ddd5117e91200d1a690e438ae7a32255f1cb171b6bbbdfbcd892,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765618055498638556,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2237eb3cfb9427342a03e244cb23da2440800776c88d39def517cae2df0c9081,PodSandboxId:7e0c763b37cf6447b07c9f367a9d448f4b876618c1146aeac6cd0602ada58d91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765618052710769135,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136e051ce4b87ff42f1c7596ef9ad758,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a44d955020176708ebbad74e424fc16d6e74e1600a4aa935cc2ad1fb033d01,PodSandboxId:79e99323f18bbf1152b9765898d58f53eb3c609f3fc414b0680b3cd72d5fb60d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONT
AINER_RUNNING,CreatedAt:1765618052613820224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47ee0aabb1efdd09eec154c30881929f05d58a9520e541025def583ab954d02,PodSandboxId:748de94d0a3965d93b15772c21ed0a143d1dc80ebe6b1cc7b52869edf3fc6961,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765618052599039262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74dbec4b8aef1a21cd0d8a33ea1dcdf6703376e2fac9decc7229c4653835a2e9,PodSandboxId:5e9144b389dce1dd3d416188988aa991a8d750e08b7fb3adbf78f24eca9c973a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765617926050470093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9244bb17b8485f84b8f76fd8b0e1a25b8f2e3ef5cf5a3f95f1be4666b075421,PodSandboxId:e6342f727896e432c490e632c9755f1679b02fab10e6d297e644839c8d4d7cb9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765617926038294787,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e2ebf7101ff7d080d10386c7911c24013836e1849ba2237f3c3ccf2a0697bb,PodSandboxId:235bdf467969af32577d0402d1c26dbaa02b168f83902d82024b9484fa704e63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765617900109965284,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ccf55277802f6d84e33d122e3b923248435c477a7d8c5e6abca019214467ad,PodSandboxId:ad35e4e9691075c9ec6e8d7f1842bc656bd2d3fc305cb40f78320be06d05e563,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765617900149382376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 623733f12fc7a2bd3df192b3433220d0,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\
",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef6c74863ea74a437871b1d75d0adf6fdd2e13050bd9f5fda41a413df2ad695,PodSandboxId:f1efa84ba477462501f24ab4ce10544e1def90c12e260715f5392f8129ccbf5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765617900061465369,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198
bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7de46befbc341adcce80258301d16635ae42bacd96697be843a3ea37de3097c,PodSandboxId:3c085dc8222fb8456c9f36193cce150826e12aca23bda60fede0bb6b4fcead2b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765617899883988957,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5086e65f-64e3-4350-af8f-d270607d63c2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:48:05 functional-553391 crio[5772]: time="2025-12-13 09:48:05.321337254Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b84a22b7-c405-4e34-9d25-74f3845b7356 name=/runtime.v1.RuntimeService/Version
	Dec 13 09:48:05 functional-553391 crio[5772]: time="2025-12-13 09:48:05.321488835Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b84a22b7-c405-4e34-9d25-74f3845b7356 name=/runtime.v1.RuntimeService/Version
	Dec 13 09:48:05 functional-553391 crio[5772]: time="2025-12-13 09:48:05.322773270Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cd4ec1ac-1d12-4dd9-a340-6fee87d63501 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:48:05 functional-553391 crio[5772]: time="2025-12-13 09:48:05.323397901Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765619285323376554,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:189831,},InodesUsed:&UInt64Value{Value:89,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd4ec1ac-1d12-4dd9-a340-6fee87d63501 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:48:05 functional-553391 crio[5772]: time="2025-12-13 09:48:05.324286178Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe8af099-c599-4c6e-bc6b-7c16c75635bb name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:48:05 functional-553391 crio[5772]: time="2025-12-13 09:48:05.324493765Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe8af099-c599-4c6e-bc6b-7c16c75635bb name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:48:05 functional-553391 crio[5772]: time="2025-12-13 09:48:05.324906331Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a033e03e6998717fd54c7a982e213390e2a93a581ab7db6633f97e03610bcce,PodSandboxId:1c66dfad4cada9aca6045bea029b2985bbb8d379f23e2de4ff19f786a4d9e0a2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765618055477026408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0f8e40cbfe4fe5278f2c2ebdcd0a07ddc2799d2d471fefccfdb255eba910739,PodSandboxId:981ef7045b5a7ba2a152b10d09e8904c2083766c230c9d447a92f2977a4b7b40,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765618055790289701,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6d3aa793a5d4bdfe5b2b3732af0374d4cddf7fb3eedc48b5fe0e21ee321de3,PodSandboxId:45411b104740ddd5117e91200d1a690e438ae7a32255f1cb171b6bbbdfbcd892,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765618055498638556,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2237eb3cfb9427342a03e244cb23da2440800776c88d39def517cae2df0c9081,PodSandboxId:7e0c763b37cf6447b07c9f367a9d448f4b876618c1146aeac6cd0602ada58d91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765618052710769135,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136e051ce4b87ff42f1c7596ef9ad758,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a44d955020176708ebbad74e424fc16d6e74e1600a4aa935cc2ad1fb033d01,PodSandboxId:79e99323f18bbf1152b9765898d58f53eb3c609f3fc414b0680b3cd72d5fb60d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONT
AINER_RUNNING,CreatedAt:1765618052613820224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47ee0aabb1efdd09eec154c30881929f05d58a9520e541025def583ab954d02,PodSandboxId:748de94d0a3965d93b15772c21ed0a143d1dc80ebe6b1cc7b52869edf3fc6961,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765618052599039262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74dbec4b8aef1a21cd0d8a33ea1dcdf6703376e2fac9decc7229c4653835a2e9,PodSandboxId:5e9144b389dce1dd3d416188988aa991a8d750e08b7fb3adbf78f24eca9c973a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765617926050470093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9244bb17b8485f84b8f76fd8b0e1a25b8f2e3ef5cf5a3f95f1be4666b075421,PodSandboxId:e6342f727896e432c490e632c9755f1679b02fab10e6d297e644839c8d4d7cb9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765617926038294787,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e2ebf7101ff7d080d10386c7911c24013836e1849ba2237f3c3ccf2a0697bb,PodSandboxId:235bdf467969af32577d0402d1c26dbaa02b168f83902d82024b9484fa704e63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765617900109965284,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ccf55277802f6d84e33d122e3b923248435c477a7d8c5e6abca019214467ad,PodSandboxId:ad35e4e9691075c9ec6e8d7f1842bc656bd2d3fc305cb40f78320be06d05e563,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765617900149382376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 623733f12fc7a2bd3df192b3433220d0,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\
",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef6c74863ea74a437871b1d75d0adf6fdd2e13050bd9f5fda41a413df2ad695,PodSandboxId:f1efa84ba477462501f24ab4ce10544e1def90c12e260715f5392f8129ccbf5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765617900061465369,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198
bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7de46befbc341adcce80258301d16635ae42bacd96697be843a3ea37de3097c,PodSandboxId:3c085dc8222fb8456c9f36193cce150826e12aca23bda60fede0bb6b4fcead2b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765617899883988957,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe8af099-c599-4c6e-bc6b-7c16c75635bb name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:48:05 functional-553391 crio[5772]: time="2025-12-13 09:48:05.362652771Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b234089b-df46-466f-a1cf-59a9368e54ca name=/runtime.v1.RuntimeService/Version
	Dec 13 09:48:05 functional-553391 crio[5772]: time="2025-12-13 09:48:05.362724399Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b234089b-df46-466f-a1cf-59a9368e54ca name=/runtime.v1.RuntimeService/Version
	Dec 13 09:48:05 functional-553391 crio[5772]: time="2025-12-13 09:48:05.363973016Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e948a96b-e41a-4aa7-b940-98603941a5bf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:48:05 functional-553391 crio[5772]: time="2025-12-13 09:48:05.365070969Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765619285365001964,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:189831,},InodesUsed:&UInt64Value{Value:89,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e948a96b-e41a-4aa7-b940-98603941a5bf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:48:05 functional-553391 crio[5772]: time="2025-12-13 09:48:05.366544946Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b8541f43-b1a2-4fa2-a595-b81f9f48ed0a name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:48:05 functional-553391 crio[5772]: time="2025-12-13 09:48:05.366776965Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b8541f43-b1a2-4fa2-a595-b81f9f48ed0a name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:48:05 functional-553391 crio[5772]: time="2025-12-13 09:48:05.367206976Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a033e03e6998717fd54c7a982e213390e2a93a581ab7db6633f97e03610bcce,PodSandboxId:1c66dfad4cada9aca6045bea029b2985bbb8d379f23e2de4ff19f786a4d9e0a2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765618055477026408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0f8e40cbfe4fe5278f2c2ebdcd0a07ddc2799d2d471fefccfdb255eba910739,PodSandboxId:981ef7045b5a7ba2a152b10d09e8904c2083766c230c9d447a92f2977a4b7b40,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765618055790289701,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6d3aa793a5d4bdfe5b2b3732af0374d4cddf7fb3eedc48b5fe0e21ee321de3,PodSandboxId:45411b104740ddd5117e91200d1a690e438ae7a32255f1cb171b6bbbdfbcd892,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765618055498638556,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2237eb3cfb9427342a03e244cb23da2440800776c88d39def517cae2df0c9081,PodSandboxId:7e0c763b37cf6447b07c9f367a9d448f4b876618c1146aeac6cd0602ada58d91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765618052710769135,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136e051ce4b87ff42f1c7596ef9ad758,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a44d955020176708ebbad74e424fc16d6e74e1600a4aa935cc2ad1fb033d01,PodSandboxId:79e99323f18bbf1152b9765898d58f53eb3c609f3fc414b0680b3cd72d5fb60d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONT
AINER_RUNNING,CreatedAt:1765618052613820224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47ee0aabb1efdd09eec154c30881929f05d58a9520e541025def583ab954d02,PodSandboxId:748de94d0a3965d93b15772c21ed0a143d1dc80ebe6b1cc7b52869edf3fc6961,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765618052599039262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74dbec4b8aef1a21cd0d8a33ea1dcdf6703376e2fac9decc7229c4653835a2e9,PodSandboxId:5e9144b389dce1dd3d416188988aa991a8d750e08b7fb3adbf78f24eca9c973a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765617926050470093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9244bb17b8485f84b8f76fd8b0e1a25b8f2e3ef5cf5a3f95f1be4666b075421,PodSandboxId:e6342f727896e432c490e632c9755f1679b02fab10e6d297e644839c8d4d7cb9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765617926038294787,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e2ebf7101ff7d080d10386c7911c24013836e1849ba2237f3c3ccf2a0697bb,PodSandboxId:235bdf467969af32577d0402d1c26dbaa02b168f83902d82024b9484fa704e63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765617900109965284,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ccf55277802f6d84e33d122e3b923248435c477a7d8c5e6abca019214467ad,PodSandboxId:ad35e4e9691075c9ec6e8d7f1842bc656bd2d3fc305cb40f78320be06d05e563,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765617900149382376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 623733f12fc7a2bd3df192b3433220d0,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\
",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef6c74863ea74a437871b1d75d0adf6fdd2e13050bd9f5fda41a413df2ad695,PodSandboxId:f1efa84ba477462501f24ab4ce10544e1def90c12e260715f5392f8129ccbf5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765617900061465369,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198
bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7de46befbc341adcce80258301d16635ae42bacd96697be843a3ea37de3097c,PodSandboxId:3c085dc8222fb8456c9f36193cce150826e12aca23bda60fede0bb6b4fcead2b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765617899883988957,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b8541f43-b1a2-4fa2-a595-b81f9f48ed0a name=/runtime.v1.RuntimeService/ListContainers
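	
	The CRI-O entries above are debug traces of the kubelet's periodic Version, ImageFsInfo and ListContainers RPCs over the CRI v1 API. As a rough sketch only (not part of the captured logs), the same ListContainers call could be issued directly against the CRI-O socket from Go along the following lines; the socket path, module choices and minimal error handling are assumptions, not anything the report prescribes.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// CRI-O's default CRI endpoint on the VM; adjust for other runtimes (assumption).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
	
		// An empty filter corresponds to the "No filters were applied" path in the
		// log above, so exited containers from earlier attempts are returned too.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %-25s attempt=%d  %s\n",
				c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}
	
	The same container list, already rendered, is what appears in the container status section below (crictl ps -a on the node shows it as well).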
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f0f8e40cbfe4f       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139   20 minutes ago      Running             coredns                   2                   981ef7045b5a7       coredns-7d764666f9-rjg8z                    kube-system
	4b6d3aa793a5d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   20 minutes ago      Running             storage-provisioner       3                   45411b104740d       storage-provisioner                         kube-system
	4a033e03e6998       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   20 minutes ago      Running             kube-proxy                2                   1c66dfad4cada       kube-proxy-nmxbh                            kube-system
	2237eb3cfb942       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   20 minutes ago      Running             kube-apiserver            0                   7e0c763b37cf6       kube-apiserver-functional-553391            kube-system
	17a44d9550201       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   20 minutes ago      Running             etcd                      2                   79e99323f18bb       etcd-functional-553391                      kube-system
	c47ee0aabb1ef       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   20 minutes ago      Running             kube-controller-manager   2                   748de94d0a396       kube-controller-manager-functional-553391   kube-system
	74dbec4b8aef1       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139   22 minutes ago      Exited              coredns                   1                   5e9144b389dce       coredns-7d764666f9-rjg8z                    kube-system
	b9244bb17b848       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   22 minutes ago      Exited              storage-provisioner       2                   e6342f727896e       storage-provisioner                         kube-system
	15ccf55277802       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   23 minutes ago      Exited              kube-scheduler            1                   ad35e4e969107       kube-scheduler-functional-553391            kube-system
	43e2ebf7101ff       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   23 minutes ago      Exited              etcd                      1                   235bdf467969a       etcd-functional-553391                      kube-system
	bef6c74863ea7       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   23 minutes ago      Exited              kube-controller-manager   1                   f1efa84ba4774       kube-controller-manager-functional-553391   kube-system
	a7de46befbc34       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   23 minutes ago      Exited              kube-proxy                1                   3c085dc8222fb       kube-proxy-nmxbh                            kube-system
	
	
	==> coredns [74dbec4b8aef1a21cd0d8a33ea1dcdf6703376e2fac9decc7229c4653835a2e9] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:38849 - 10587 "HINFO IN 1179697731504859025.6635090832342881038. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.0581936s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f0f8e40cbfe4fe5278f2c2ebdcd0a07ddc2799d2d471fefccfdb255eba910739] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:34740 - 64681 "HINFO IN 9182635211618943717.7247219486871022041. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.096462358s
	
	
	==> describe nodes <==
	Name:               functional-553391
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-553391
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b
	                    minikube.k8s.io/name=functional-553391
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T09_24_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 09:23:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-553391
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 09:48:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 09:45:26 +0000   Sat, 13 Dec 2025 09:23:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 09:45:26 +0000   Sat, 13 Dec 2025 09:23:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 09:45:26 +0000   Sat, 13 Dec 2025 09:23:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 09:45:26 +0000   Sat, 13 Dec 2025 09:24:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.38
	  Hostname:    functional-553391
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 fdf3a8f76472433fb21c3307ef40831b
	  System UUID:                fdf3a8f7-6472-433f-b21c-3307ef40831b
	  Boot ID:                    3c8d40c0-0e2d-4a05-9897-d24bc6cacbb9
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-rjg8z                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     24m
	  kube-system                 etcd-functional-553391                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         24m
	  kube-system                 kube-apiserver-functional-553391             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-functional-553391    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-nmxbh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-functional-553391             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  24m   node-controller  Node functional-553391 event: Registered Node functional-553391 in Controller
	  Normal  RegisteredNode  22m   node-controller  Node functional-553391 event: Registered Node functional-553391 in Controller
	  Normal  RegisteredNode  20m   node-controller  Node functional-553391 event: Registered Node functional-553391 in Controller
	
	
	==> dmesg <==
	[Dec13 09:23] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000049] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004691] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.166245] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.084827] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.098140] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.130337] kauditd_printk_skb: 171 callbacks suppressed
	[Dec13 09:24] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.600381] kauditd_printk_skb: 248 callbacks suppressed
	[ +35.932646] kauditd_printk_skb: 39 callbacks suppressed
	[Dec13 09:25] kauditd_printk_skb: 356 callbacks suppressed
	[  +1.617310] kauditd_printk_skb: 42 callbacks suppressed
	[ +12.973574] kauditd_printk_skb: 12 callbacks suppressed
	[Dec13 09:27] kauditd_printk_skb: 209 callbacks suppressed
	[  +3.605359] kauditd_printk_skb: 153 callbacks suppressed
	[  +6.110507] kauditd_printk_skb: 133 callbacks suppressed
	[Dec13 09:31] kauditd_printk_skb: 36 callbacks suppressed
	[Dec13 09:35] kauditd_printk_skb: 2 callbacks suppressed
	[Dec13 09:38] kauditd_printk_skb: 2 callbacks suppressed
	[Dec13 09:40] crun[9283]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[Dec13 09:41] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [17a44d955020176708ebbad74e424fc16d6e74e1600a4aa935cc2ad1fb033d01] <==
	{"level":"warn","ts":"2025-12-13T09:27:33.809713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.823612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.829416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.840008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.850256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.858039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.864984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.872083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.891523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.905524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.916312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.922300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.931561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.941621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.945560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.993699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50688","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T09:37:33.367003Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":959}
	{"level":"info","ts":"2025-12-13T09:37:33.377296Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":959,"took":"9.242015ms","hash":2537724432,"current-db-size-bytes":2850816,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2850816,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2025-12-13T09:37:33.377568Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2537724432,"revision":959,"compact-revision":-1}
	{"level":"info","ts":"2025-12-13T09:42:33.374708Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1250}
	{"level":"info","ts":"2025-12-13T09:42:33.379751Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1250,"took":"4.321195ms","hash":629441587,"current-db-size-bytes":2850816,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1765376,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-12-13T09:42:33.379788Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":629441587,"revision":1250,"compact-revision":959}
	{"level":"info","ts":"2025-12-13T09:47:33.383456Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1510}
	{"level":"info","ts":"2025-12-13T09:47:33.387616Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1510,"took":"3.829067ms","hash":1486573389,"current-db-size-bytes":2850816,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1687552,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-12-13T09:47:33.387663Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1486573389,"revision":1510,"compact-revision":1250}
	
	
	==> etcd [43e2ebf7101ff7d080d10386c7911c24013836e1849ba2237f3c3ccf2a0697bb] <==
	{"level":"warn","ts":"2025-12-13T09:25:23.910219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:25:23.923853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:25:23.927780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:25:23.936775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:25:23.944573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:25:23.953555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:25:24.035457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36150","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T09:25:51.349706Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-13T09:25:51.349772Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-553391","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.38:2380"],"advertise-client-urls":["https://192.168.39.38:2379"]}
	{"level":"error","ts":"2025-12-13T09:25:51.357195Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T09:25:51.442258Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T09:25:51.443776Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T09:25:51.443862Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"38b26e584d45e0da","current-leader-member-id":"38b26e584d45e0da"}
	{"level":"warn","ts":"2025-12-13T09:25:51.443944Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T09:25:51.444031Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T09:25:51.444041Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T09:25:51.444058Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-13T09:25:51.444015Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-13T09:25:51.444108Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.38:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T09:25:51.444134Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.38:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T09:25:51.444142Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.38:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T09:25:51.447967Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.38:2380"}
	{"level":"error","ts":"2025-12-13T09:25:51.448071Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.38:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T09:25:51.448106Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.38:2380"}
	{"level":"info","ts":"2025-12-13T09:25:51.448114Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-553391","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.38:2380"],"advertise-client-urls":["https://192.168.39.38:2379"]}
	
	
	==> kernel <==
	 09:48:05 up 24 min,  0 users,  load average: 0.80, 0.32, 0.22
	Linux functional-553391 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 11 23:11:39 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [2237eb3cfb9427342a03e244cb23da2440800776c88d39def517cae2df0c9081] <==
	I1213 09:27:34.752063       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1213 09:27:34.752152       1 aggregator.go:187] initial CRD sync complete...
	I1213 09:27:34.752161       1 autoregister_controller.go:144] Starting autoregister controller
	I1213 09:27:34.752166       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 09:27:34.752170       1 cache.go:39] Caches are synced for autoregister controller
	I1213 09:27:34.755443       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 09:27:34.768964       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 09:27:34.989792       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 09:27:35.552170       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1213 09:27:36.683227       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 09:27:36.740625       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 09:27:36.773702       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 09:27:36.784227       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 09:27:38.099192       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 09:27:38.250187       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 09:27:38.348439       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 09:31:44.252988       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.31.242"}
	I1213 09:31:47.762685       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.66.94"}
	I1213 09:31:51.195438       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.98.33.51"}
	I1213 09:35:55.088227       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 09:35:55.327550       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.202.53"}
	I1213 09:35:55.349609       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.109.158"}
	I1213 09:37:34.662190       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 09:38:04.053710       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.57.7"}
	I1213 09:47:34.663021       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [bef6c74863ea74a437871b1d75d0adf6fdd2e13050bd9f5fda41a413df2ad695] <==
	I1213 09:25:27.972668       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.974789       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.972832       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.974966       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.975360       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.972929       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.977747       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.977858       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.977973       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.978099       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.978132       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.978207       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.978922       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.979032       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.979085       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.984323       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.984406       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.984450       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.984501       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.988096       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:28.008019       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:28.066414       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:28.084492       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:28.084512       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1213 09:25:28.084517       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-controller-manager [c47ee0aabb1efdd09eec154c30881929f05d58a9520e541025def583ab954d02] <==
	I1213 09:27:37.929063       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.959660       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.959721       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.959741       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.959800       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.960015       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.960080       1 range_allocator.go:177] "Sending events to api server"
	I1213 09:27:37.960131       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1213 09:27:37.960137       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:27:37.960141       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.959682       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.960314       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1213 09:27:37.960382       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-553391"
	I1213 09:27:37.960424       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1213 09:27:37.968967       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.971598       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.972793       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1213 09:27:37.972804       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1213 09:27:37.986319       1 shared_informer.go:377] "Caches are synced"
	E1213 09:35:55.174531       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:55.184739       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:55.198136       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:55.216383       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:55.221609       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:55.233963       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [4a033e03e6998717fd54c7a982e213390e2a93a581ab7db6633f97e03610bcce] <==
	I1213 09:27:36.194743       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:27:36.296159       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:36.299041       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.38"]
	E1213 09:27:36.299525       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:27:36.369439       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 09:27:36.369524       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 09:27:36.369545       1 server_linux.go:136] "Using iptables Proxier"
	I1213 09:27:36.381798       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:27:36.382110       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1213 09:27:36.382141       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:27:36.388769       1 config.go:200] "Starting service config controller"
	I1213 09:27:36.389028       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:27:36.389070       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:27:36.389076       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:27:36.389235       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:27:36.389535       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:27:36.390638       1 config.go:309] "Starting node config controller"
	I1213 09:27:36.390719       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:27:36.489796       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 09:27:36.489917       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 09:27:36.489987       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 09:27:36.491158       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [a7de46befbc341adcce80258301d16635ae42bacd96697be843a3ea37de3097c] <==
	I1213 09:25:00.988777       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:25:25.893454       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:25.893556       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.38"]
	E1213 09:25:25.893655       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:25:25.940800       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 09:25:25.940970       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 09:25:25.940997       1 server_linux.go:136] "Using iptables Proxier"
	I1213 09:25:25.950368       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:25:25.950781       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1213 09:25:25.950797       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:25:25.953179       1 config.go:200] "Starting service config controller"
	I1213 09:25:25.954715       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:25:25.953595       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:25:25.955069       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:25:25.953610       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:25:25.955255       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:25:25.959118       1 config.go:309] "Starting node config controller"
	I1213 09:25:25.959146       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:25:26.055988       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 09:25:26.056061       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 09:25:26.056143       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 09:25:26.060009       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [15ccf55277802f6d84e33d122e3b923248435c477a7d8c5e6abca019214467ad] <==
	E1213 09:25:24.739405       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1213 09:25:24.739461       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1213 09:25:24.739526       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1213 09:25:24.739572       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1213 09:25:24.739642       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1213 09:25:24.739677       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1213 09:25:24.739770       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1213 09:25:24.739826       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1213 09:25:24.740002       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1213 09:25:24.740105       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1213 09:25:24.740186       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1213 09:25:24.740250       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1213 09:25:24.740316       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1213 09:25:24.740394       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1213 09:25:24.740516       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1213 09:25:24.740571       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1213 09:25:24.740616       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1213 09:25:24.740653       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1213 09:25:27.991267       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:51.355821       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1213 09:25:51.365725       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1213 09:25:51.365754       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:25:51.368478       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1213 09:25:51.368492       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1213 09:25:51.368520       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 13 09:47:32 functional-553391 kubelet[6136]: E1213 09:47:32.377830    6136 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765619252377318026  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189831}  inodes_used:{value:89}}"
	Dec 13 09:47:32 functional-553391 kubelet[6136]: E1213 09:47:32.377972    6136 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765619252377318026  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189831}  inodes_used:{value:89}}"
	Dec 13 09:47:32 functional-553391 kubelet[6136]: E1213 09:47:32.981541    6136 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-553391" containerName="kube-scheduler"
	Dec 13 09:47:32 functional-553391 kubelet[6136]: E1213 09:47:32.991016    6136 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\" already exists"
	Dec 13 09:47:32 functional-553391 kubelet[6136]: E1213 09:47:32.991132    6136 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\" already exists" pod="kube-system/kube-scheduler-functional-553391"
	Dec 13 09:47:32 functional-553391 kubelet[6136]: E1213 09:47:32.991150    6136 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\" already exists" pod="kube-system/kube-scheduler-functional-553391"
	Dec 13 09:47:32 functional-553391 kubelet[6136]: E1213 09:47:32.991221    6136 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-553391_kube-system(623733f12fc7a2bd3df192b3433220d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-553391_kube-system(623733f12fc7a2bd3df192b3433220d0)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-553391" podUID="623733f12fc7a2bd3df192b3433220d0"
	Dec 13 09:47:42 functional-553391 kubelet[6136]: E1213 09:47:42.380497    6136 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765619262379930965  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189831}  inodes_used:{value:89}}"
	Dec 13 09:47:42 functional-553391 kubelet[6136]: E1213 09:47:42.380550    6136 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765619262379930965  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189831}  inodes_used:{value:89}}"
	Dec 13 09:47:44 functional-553391 kubelet[6136]: E1213 09:47:44.981525    6136 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-553391" containerName="etcd"
	Dec 13 09:47:45 functional-553391 kubelet[6136]: E1213 09:47:45.981321    6136 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-553391" containerName="kube-scheduler"
	Dec 13 09:47:45 functional-553391 kubelet[6136]: E1213 09:47:45.996815    6136 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\" already exists"
	Dec 13 09:47:45 functional-553391 kubelet[6136]: E1213 09:47:45.996968    6136 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\" already exists" pod="kube-system/kube-scheduler-functional-553391"
	Dec 13 09:47:45 functional-553391 kubelet[6136]: E1213 09:47:45.996989    6136 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\" already exists" pod="kube-system/kube-scheduler-functional-553391"
	Dec 13 09:47:45 functional-553391 kubelet[6136]: E1213 09:47:45.997212    6136 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-553391_kube-system(623733f12fc7a2bd3df192b3433220d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-553391_kube-system(623733f12fc7a2bd3df192b3433220d0)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-553391" podUID="623733f12fc7a2bd3df192b3433220d0"
	Dec 13 09:47:52 functional-553391 kubelet[6136]: E1213 09:47:52.383719    6136 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765619272383151116  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189831}  inodes_used:{value:89}}"
	Dec 13 09:47:52 functional-553391 kubelet[6136]: E1213 09:47:52.383744    6136 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765619272383151116  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189831}  inodes_used:{value:89}}"
	Dec 13 09:47:53 functional-553391 kubelet[6136]: E1213 09:47:53.982216    6136 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-553391" containerName="kube-controller-manager"
	Dec 13 09:47:57 functional-553391 kubelet[6136]: E1213 09:47:57.981493    6136 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-553391" containerName="kube-scheduler"
	Dec 13 09:47:57 functional-553391 kubelet[6136]: E1213 09:47:57.992115    6136 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\" already exists"
	Dec 13 09:47:57 functional-553391 kubelet[6136]: E1213 09:47:57.992160    6136 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\" already exists" pod="kube-system/kube-scheduler-functional-553391"
	Dec 13 09:47:57 functional-553391 kubelet[6136]: E1213 09:47:57.992174    6136 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\" already exists" pod="kube-system/kube-scheduler-functional-553391"
	Dec 13 09:47:57 functional-553391 kubelet[6136]: E1213 09:47:57.992245    6136 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-553391_kube-system(623733f12fc7a2bd3df192b3433220d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-553391_kube-system(623733f12fc7a2bd3df192b3433220d0)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-553391" podUID="623733f12fc7a2bd3df192b3433220d0"
	Dec 13 09:48:02 functional-553391 kubelet[6136]: E1213 09:48:02.385654    6136 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765619282385168585  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189831}  inodes_used:{value:89}}"
	Dec 13 09:48:02 functional-553391 kubelet[6136]: E1213 09:48:02.385704    6136 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765619282385168585  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189831}  inodes_used:{value:89}}"
	
	
	==> storage-provisioner [4b6d3aa793a5d4bdfe5b2b3732af0374d4cddf7fb3eedc48b5fe0e21ee321de3] <==
	W1213 09:47:41.770084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:47:43.773940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:47:43.782768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:47:45.786864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:47:45.793979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:47:47.797755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:47:47.803029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:47:49.806446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:47:49.814981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:47:51.818398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:47:51.823209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:47:53.827994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:47:53.833675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:47:55.837241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:47:55.846295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:47:57.849863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:47:57.855738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:47:59.859317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:47:59.868688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:48:01.872150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:48:01.877648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:48:03.881249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:48:03.890416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:48:05.895143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:48:05.905785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b9244bb17b8485f84b8f76fd8b0e1a25b8f2e3ef5cf5a3f95f1be4666b075421] <==
	I1213 09:25:26.201262       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 09:25:26.213115       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 09:25:26.215708       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 09:25:26.220587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:29.676396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:33.936290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:37.534481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:40.587597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:43.611039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:43.616691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 09:25:43.616926       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 09:25:43.617034       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2a807b17-b228-43c8-97ae-e7e16ec2cdf4", APIVersion:"v1", ResourceVersion:"532", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-553391_d215204c-1541-413a-b16f-a41e2460e6c8 became leader
	I1213 09:25:43.617264       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-553391_d215204c-1541-413a-b16f-a41e2460e6c8!
	W1213 09:25:43.620228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:43.629789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 09:25:43.717650       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-553391_d215204c-1541-413a-b16f-a41e2460e6c8!
	W1213 09:25:45.635212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:45.648126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:47.651491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:47.657022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:49.662658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:49.675580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-553391 -n functional-553391
helpers_test.go:270: (dbg) Run:  kubectl --context functional-553391 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-5758569b79-mc2sr hello-node-connect-9f67c86d4-5k96g mysql-7d7b65bc95-bmf88 sp-pod dashboard-metrics-scraper-5565989548-kmw97 kubernetes-dashboard-b84665fb8-fphhq
helpers_test.go:283: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-553391 describe pod busybox-mount hello-node-5758569b79-mc2sr hello-node-connect-9f67c86d4-5k96g mysql-7d7b65bc95-bmf88 sp-pod dashboard-metrics-scraper-5565989548-kmw97 kubernetes-dashboard-b84665fb8-fphhq
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-553391 describe pod busybox-mount hello-node-5758569b79-mc2sr hello-node-connect-9f67c86d4-5k96g mysql-7d7b65bc95-bmf88 sp-pod dashboard-metrics-scraper-5565989548-kmw97 kubernetes-dashboard-b84665fb8-fphhq: exit status 1 (97.547957ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  mount-munger:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    Environment:  <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-42dlg (ro)
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-42dlg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             hello-node-5758569b79-mc2sr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Image:        kicbase/echo-server
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8f569 (ro)
	Volumes:
	  kube-api-access-8f569:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             hello-node-connect-9f67c86d4-5k96g
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Image:        kicbase/echo-server
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lv7bb (ro)
	Volumes:
	  kube-api-access-lv7bb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             mysql-7d7b65bc95-bmf88
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=mysql
	                  pod-template-hash=7d7b65bc95
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-7d7b65bc95
	Containers:
	  mysql:
	    Image:      public.ecr.aws/docker/library/mysql:8.4
	    Port:       3306/TCP (mysql)
	    Host Port:  0/TCP (mysql)
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q89gb (ro)
	Volumes:
	  kube-api-access-q89gb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Image:        public.ecr.aws/nginx/nginx:alpine
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p9sfg (ro)
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-p9sfg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-kmw97" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-fphhq" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context functional-553391 describe pod busybox-mount hello-node-5758569b79-mc2sr hello-node-connect-9f67c86d4-5k96g mysql-7d7b65bc95-bmf88 sp-pod dashboard-metrics-scraper-5565989548-kmw97 kubernetes-dashboard-b84665fb8-fphhq: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (602.59s)
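All of the pods in the describe output above are Status: Pending with Node: <none> and Events: <none>, i.e. they were never scheduled. A hedged sketch of follow-up scheduling checks; none of these commands were executed by the test harness:

# hypothetical follow-up checks, not part of the test run
kubectl --context functional-553391 get pods -n default -o wide
kubectl --context functional-553391 get events -n default --sort-by=.lastTimestamp
kubectl --context functional-553391 describe nodes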

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (368.66s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [4b1284e2-956a-4e4a-b504-57f20fa9a365] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004338682s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-553391 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-553391 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-553391 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-553391 apply -f testdata/storage-provisioner/pod.yaml
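For context, the claim and pod applied here can be roughly reconstructed from the sp-pod describe output later in this log (claim myclaim, container myfrontend, image public.ecr.aws/nginx/nginx:alpine, mount path /tmp/mount, label test=storage-provisioner). The sketch below is an assumption-level equivalent, not the literal contents of testdata/storage-provisioner/pvc.yaml or pod.yaml; the storage size and access mode in particular are guesses:

# assumption-level reconstruction; only the names, image, label and mount path are taken from this log
kubectl --context functional-553391 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
    - name: myfrontend
      image: public.ecr.aws/nginx/nginx:alpine
      volumeMounts:
        - mountPath: /tmp/mount
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
EOF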
I1213 09:32:01.480785  391877 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [cf71624e-8cb3-41b3-b5c1-b8594032a759] Pending
E1213 09:32:37.813342  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:33:05.522047  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:33:56.551641  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:35:19.625463  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-553391 -n functional-553391
functional_test_pvc_test.go:140: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-12-13 09:38:01.719085755 +0000 UTC m=+1592.447046366
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-553391 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-553391 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
  myfrontend:
    Image:        public.ecr.aws/nginx/nginx:alpine
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p9sfg (ro)
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-p9sfg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-553391 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-553391 logs sp-pod -n default:
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
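sp-pod mounts PersistentVolumeClaim myclaim (see the describe output above) and never left Pending, so the next thing to establish is whether the claim ever bound and whether a default storage class was available. A hedged sketch of such checks; none of them were run as part of this test:

# hypothetical follow-up checks, not executed by the test harness
kubectl --context functional-553391 get pvc myclaim -n default
kubectl --context functional-553391 describe pvc myclaim -n default
kubectl --context functional-553391 get storageclass
kubectl --context functional-553391 -n kube-system logs -l integration-test=storage-provisioner --tail=50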
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-553391 -n functional-553391
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-553391 logs -n 25: (1.273514955s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image     │ functional-553391 image ls                                                                                                                          │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:31 UTC │ 13 Dec 25 09:31 UTC │
	│ image     │ functional-553391 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr                              │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:31 UTC │ 13 Dec 25 09:31 UTC │
	│ image     │ functional-553391 image ls                                                                                                                          │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:31 UTC │ 13 Dec 25 09:31 UTC │
	│ image     │ functional-553391 image save --daemon kicbase/echo-server:functional-553391 --alsologtostderr                                                       │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:31 UTC │ 13 Dec 25 09:31 UTC │
	│ ssh       │ functional-553391 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ ssh       │ functional-553391 ssh sudo umount -f /mount-9p                                                                                                      │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │ 13 Dec 25 09:35 UTC │
	│ ssh       │ functional-553391 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ mount     │ -p functional-553391 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2088139448/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ ssh       │ functional-553391 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │ 13 Dec 25 09:35 UTC │
	│ ssh       │ functional-553391 ssh -- ls -la /mount-9p                                                                                                           │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │ 13 Dec 25 09:35 UTC │
	│ ssh       │ functional-553391 ssh sudo umount -f /mount-9p                                                                                                      │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ ssh       │ functional-553391 ssh findmnt -T /mount1                                                                                                            │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ mount     │ -p functional-553391 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3184993173/001:/mount3 --alsologtostderr -v=1                │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ mount     │ -p functional-553391 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3184993173/001:/mount2 --alsologtostderr -v=1                │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ mount     │ -p functional-553391 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3184993173/001:/mount1 --alsologtostderr -v=1                │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ ssh       │ functional-553391 ssh findmnt -T /mount1                                                                                                            │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │ 13 Dec 25 09:35 UTC │
	│ ssh       │ functional-553391 ssh findmnt -T /mount2                                                                                                            │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │ 13 Dec 25 09:35 UTC │
	│ ssh       │ functional-553391 ssh findmnt -T /mount3                                                                                                            │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │ 13 Dec 25 09:35 UTC │
	│ mount     │ -p functional-553391 --kill=true                                                                                                                    │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ ssh       │ functional-553391 ssh echo hello                                                                                                                    │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │ 13 Dec 25 09:35 UTC │
	│ ssh       │ functional-553391 ssh cat /etc/hostname                                                                                                             │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │ 13 Dec 25 09:35 UTC │
	│ start     │ -p functional-553391 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ start     │ -p functional-553391 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                   │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ start     │ -p functional-553391 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-553391 --alsologtostderr -v=1                                                                                      │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:35:54
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:35:54.159923  403295 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:35:54.160027  403295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:35:54.160032  403295 out.go:374] Setting ErrFile to fd 2...
	I1213 09:35:54.160036  403295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:35:54.160364  403295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
	I1213 09:35:54.160842  403295 out.go:368] Setting JSON to false
	I1213 09:35:54.161750  403295 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4703,"bootTime":1765613851,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:35:54.161818  403295 start.go:143] virtualization: kvm guest
	I1213 09:35:54.163745  403295 out.go:179] * [functional-553391] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:35:54.165254  403295 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 09:35:54.165269  403295 notify.go:221] Checking for updates...
	I1213 09:35:54.167675  403295 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:35:54.168945  403295 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
	I1213 09:35:54.170341  403295 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
	I1213 09:35:54.171825  403295 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:35:54.173115  403295 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:35:54.174891  403295 config.go:182] Loaded profile config "functional-553391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:35:54.175647  403295 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:35:54.206662  403295 out.go:179] * Using the kvm2 driver based on existing profile
	I1213 09:35:54.207891  403295 start.go:309] selected driver: kvm2
	I1213 09:35:54.207911  403295 start.go:927] validating driver "kvm2" against &{Name:functional-553391 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-553391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:35:54.208021  403295 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:35:54.210175  403295 out.go:203] 
	W1213 09:35:54.211537  403295 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 09:35:54.212968  403295 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 09:38:02 functional-553391 crio[5772]: time="2025-12-13 09:38:02.462863964Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765618682462839094,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:164172,},InodesUsed:&UInt64Value{Value:73,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c19c1378-dc7e-4143-8efd-57b58c04197f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:38:02 functional-553391 crio[5772]: time="2025-12-13 09:38:02.463660709Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4adc6bd8-dc04-415e-8cd5-74e3d6d233c4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:38:02 functional-553391 crio[5772]: time="2025-12-13 09:38:02.463760026Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4adc6bd8-dc04-415e-8cd5-74e3d6d233c4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:38:02 functional-553391 crio[5772]: time="2025-12-13 09:38:02.464033601Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a033e03e6998717fd54c7a982e213390e2a93a581ab7db6633f97e03610bcce,PodSandboxId:1c66dfad4cada9aca6045bea029b2985bbb8d379f23e2de4ff19f786a4d9e0a2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765618055477026408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0f8e40cbfe4fe5278f2c2ebdcd0a07ddc2799d2d471fefccfdb255eba910739,PodSandboxId:981ef7045b5a7ba2a152b10d09e8904c2083766c230c9d447a92f2977a4b7b40,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765618055790289701,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6d3aa793a5d4bdfe5b2b3732af0374d4cddf7fb3eedc48b5fe0e21ee321de3,PodSandboxId:45411b104740ddd5117e91200d1a690e438ae7a32255f1cb171b6bbbdfbcd892,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765618055498638556,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2237eb3cfb9427342a03e244cb23da2440800776c88d39def517cae2df0c9081,PodSandboxId:7e0c763b37cf6447b07c9f367a9d448f4b876618c1146aeac6cd0602ada58d91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765618052710769135,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136e051ce4b87ff42f1c7596ef9ad758,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a44d955020176708ebbad74e424fc16d6e74e1600a4aa935cc2ad1fb033d01,PodSandboxId:79e99323f18bbf1152b9765898d58f53eb3c609f3fc414b0680b3cd72d5fb60d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONT
AINER_RUNNING,CreatedAt:1765618052613820224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47ee0aabb1efdd09eec154c30881929f05d58a9520e541025def583ab954d02,PodSandboxId:748de94d0a3965d93b15772c21ed0a143d1dc80ebe6b1cc7b52869edf3fc6961,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765618052599039262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74dbec4b8aef1a21cd0d8a33ea1dcdf6703376e2fac9decc7229c4653835a2e9,PodSandboxId:5e9144b389dce1dd3d416188988aa991a8d750e08b7fb3adbf78f24eca9c973a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765617926050470093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9244bb17b8485f84b8f76fd8b0e1a25b8f2e3ef5cf5a3f95f1be4666b075421,PodSandboxId:e6342f727896e432c490e632c9755f1679b02fab10e6d297e644839c8d4d7cb9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765617926038294787,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e2ebf7101ff7d080d10386c7911c24013836e1849ba2237f3c3ccf2a0697bb,PodSandboxId:235bdf467969af32577d0402d1c26dbaa02b168f83902d82024b9484fa704e63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765617900109965284,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ccf55277802f6d84e33d122e3b923248435c477a7d8c5e6abca019214467ad,PodSandboxId:ad35e4e9691075c9ec6e8d7f1842bc656bd2d3fc305cb40f78320be06d05e563,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765617900149382376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 623733f12fc7a2bd3df192b3433220d0,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\
",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef6c74863ea74a437871b1d75d0adf6fdd2e13050bd9f5fda41a413df2ad695,PodSandboxId:f1efa84ba477462501f24ab4ce10544e1def90c12e260715f5392f8129ccbf5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765617900061465369,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198
bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7de46befbc341adcce80258301d16635ae42bacd96697be843a3ea37de3097c,PodSandboxId:3c085dc8222fb8456c9f36193cce150826e12aca23bda60fede0bb6b4fcead2b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765617899883988957,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4adc6bd8-dc04-415e-8cd5-74e3d6d233c4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:38:02 functional-553391 crio[5772]: time="2025-12-13 09:38:02.501318013Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=27eace95-b045-4629-84a1-ba983b001c9a name=/runtime.v1.RuntimeService/Version
	Dec 13 09:38:02 functional-553391 crio[5772]: time="2025-12-13 09:38:02.501410860Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=27eace95-b045-4629-84a1-ba983b001c9a name=/runtime.v1.RuntimeService/Version
	Dec 13 09:38:02 functional-553391 crio[5772]: time="2025-12-13 09:38:02.502676959Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7f49ecaf-5168-4b21-add9-e494a68f3085 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:38:02 functional-553391 crio[5772]: time="2025-12-13 09:38:02.503211889Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765618682503187821,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:164172,},InodesUsed:&UInt64Value{Value:73,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f49ecaf-5168-4b21-add9-e494a68f3085 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:38:02 functional-553391 crio[5772]: time="2025-12-13 09:38:02.504253645Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=84e08c9f-d826-470f-868b-60688ccdee56 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:38:02 functional-553391 crio[5772]: time="2025-12-13 09:38:02.504327761Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=84e08c9f-d826-470f-868b-60688ccdee56 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:38:02 functional-553391 crio[5772]: time="2025-12-13 09:38:02.504560780Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a033e03e6998717fd54c7a982e213390e2a93a581ab7db6633f97e03610bcce,PodSandboxId:1c66dfad4cada9aca6045bea029b2985bbb8d379f23e2de4ff19f786a4d9e0a2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765618055477026408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0f8e40cbfe4fe5278f2c2ebdcd0a07ddc2799d2d471fefccfdb255eba910739,PodSandboxId:981ef7045b5a7ba2a152b10d09e8904c2083766c230c9d447a92f2977a4b7b40,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765618055790289701,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6d3aa793a5d4bdfe5b2b3732af0374d4cddf7fb3eedc48b5fe0e21ee321de3,PodSandboxId:45411b104740ddd5117e91200d1a690e438ae7a32255f1cb171b6bbbdfbcd892,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765618055498638556,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2237eb3cfb9427342a03e244cb23da2440800776c88d39def517cae2df0c9081,PodSandboxId:7e0c763b37cf6447b07c9f367a9d448f4b876618c1146aeac6cd0602ada58d91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765618052710769135,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136e051ce4b87ff42f1c7596ef9ad758,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a44d955020176708ebbad74e424fc16d6e74e1600a4aa935cc2ad1fb033d01,PodSandboxId:79e99323f18bbf1152b9765898d58f53eb3c609f3fc414b0680b3cd72d5fb60d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONT
AINER_RUNNING,CreatedAt:1765618052613820224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47ee0aabb1efdd09eec154c30881929f05d58a9520e541025def583ab954d02,PodSandboxId:748de94d0a3965d93b15772c21ed0a143d1dc80ebe6b1cc7b52869edf3fc6961,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765618052599039262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74dbec4b8aef1a21cd0d8a33ea1dcdf6703376e2fac9decc7229c4653835a2e9,PodSandboxId:5e9144b389dce1dd3d416188988aa991a8d750e08b7fb3adbf78f24eca9c973a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765617926050470093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9244bb17b8485f84b8f76fd8b0e1a25b8f2e3ef5cf5a3f95f1be4666b075421,PodSandboxId:e6342f727896e432c490e632c9755f1679b02fab10e6d297e644839c8d4d7cb9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765617926038294787,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e2ebf7101ff7d080d10386c7911c24013836e1849ba2237f3c3ccf2a0697bb,PodSandboxId:235bdf467969af32577d0402d1c26dbaa02b168f83902d82024b9484fa704e63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765617900109965284,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ccf55277802f6d84e33d122e3b923248435c477a7d8c5e6abca019214467ad,PodSandboxId:ad35e4e9691075c9ec6e8d7f1842bc656bd2d3fc305cb40f78320be06d05e563,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765617900149382376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 623733f12fc7a2bd3df192b3433220d0,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\
",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef6c74863ea74a437871b1d75d0adf6fdd2e13050bd9f5fda41a413df2ad695,PodSandboxId:f1efa84ba477462501f24ab4ce10544e1def90c12e260715f5392f8129ccbf5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765617900061465369,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198
bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7de46befbc341adcce80258301d16635ae42bacd96697be843a3ea37de3097c,PodSandboxId:3c085dc8222fb8456c9f36193cce150826e12aca23bda60fede0bb6b4fcead2b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765617899883988957,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=84e08c9f-d826-470f-868b-60688ccdee56 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:38:02 functional-553391 crio[5772]: time="2025-12-13 09:38:02.535378327Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=13cf3f1f-f576-467f-bb98-339fab0a6cf6 name=/runtime.v1.RuntimeService/Version
	Dec 13 09:38:02 functional-553391 crio[5772]: time="2025-12-13 09:38:02.535683078Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=13cf3f1f-f576-467f-bb98-339fab0a6cf6 name=/runtime.v1.RuntimeService/Version
	Dec 13 09:38:02 functional-553391 crio[5772]: time="2025-12-13 09:38:02.537435609Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6b69b4cd-98c4-4525-97bb-16202cbeab1e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:38:02 functional-553391 crio[5772]: time="2025-12-13 09:38:02.537982178Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765618682537958660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:164172,},InodesUsed:&UInt64Value{Value:73,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b69b4cd-98c4-4525-97bb-16202cbeab1e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:38:02 functional-553391 crio[5772]: time="2025-12-13 09:38:02.539009492Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98842a92-4f22-4f56-a837-dbce1b2f6122 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:38:02 functional-553391 crio[5772]: time="2025-12-13 09:38:02.539349535Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=98842a92-4f22-4f56-a837-dbce1b2f6122 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:38:02 functional-553391 crio[5772]: time="2025-12-13 09:38:02.539988478Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a033e03e6998717fd54c7a982e213390e2a93a581ab7db6633f97e03610bcce,PodSandboxId:1c66dfad4cada9aca6045bea029b2985bbb8d379f23e2de4ff19f786a4d9e0a2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765618055477026408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0f8e40cbfe4fe5278f2c2ebdcd0a07ddc2799d2d471fefccfdb255eba910739,PodSandboxId:981ef7045b5a7ba2a152b10d09e8904c2083766c230c9d447a92f2977a4b7b40,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765618055790289701,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6d3aa793a5d4bdfe5b2b3732af0374d4cddf7fb3eedc48b5fe0e21ee321de3,PodSandboxId:45411b104740ddd5117e91200d1a690e438ae7a32255f1cb171b6bbbdfbcd892,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765618055498638556,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2237eb3cfb9427342a03e244cb23da2440800776c88d39def517cae2df0c9081,PodSandboxId:7e0c763b37cf6447b07c9f367a9d448f4b876618c1146aeac6cd0602ada58d91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765618052710769135,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136e051ce4b87ff42f1c7596ef9ad758,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a44d955020176708ebbad74e424fc16d6e74e1600a4aa935cc2ad1fb033d01,PodSandboxId:79e99323f18bbf1152b9765898d58f53eb3c609f3fc414b0680b3cd72d5fb60d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONT
AINER_RUNNING,CreatedAt:1765618052613820224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47ee0aabb1efdd09eec154c30881929f05d58a9520e541025def583ab954d02,PodSandboxId:748de94d0a3965d93b15772c21ed0a143d1dc80ebe6b1cc7b52869edf3fc6961,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765618052599039262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74dbec4b8aef1a21cd0d8a33ea1dcdf6703376e2fac9decc7229c4653835a2e9,PodSandboxId:5e9144b389dce1dd3d416188988aa991a8d750e08b7fb3adbf78f24eca9c973a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765617926050470093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9244bb17b8485f84b8f76fd8b0e1a25b8f2e3ef5cf5a3f95f1be4666b075421,PodSandboxId:e6342f727896e432c490e632c9755f1679b02fab10e6d297e644839c8d4d7cb9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765617926038294787,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e2ebf7101ff7d080d10386c7911c24013836e1849ba2237f3c3ccf2a0697bb,PodSandboxId:235bdf467969af32577d0402d1c26dbaa02b168f83902d82024b9484fa704e63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765617900109965284,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ccf55277802f6d84e33d122e3b923248435c477a7d8c5e6abca019214467ad,PodSandboxId:ad35e4e9691075c9ec6e8d7f1842bc656bd2d3fc305cb40f78320be06d05e563,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765617900149382376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 623733f12fc7a2bd3df192b3433220d0,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\
",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef6c74863ea74a437871b1d75d0adf6fdd2e13050bd9f5fda41a413df2ad695,PodSandboxId:f1efa84ba477462501f24ab4ce10544e1def90c12e260715f5392f8129ccbf5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765617900061465369,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198
bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7de46befbc341adcce80258301d16635ae42bacd96697be843a3ea37de3097c,PodSandboxId:3c085dc8222fb8456c9f36193cce150826e12aca23bda60fede0bb6b4fcead2b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765617899883988957,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=98842a92-4f22-4f56-a837-dbce1b2f6122 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:38:02 functional-553391 crio[5772]: time="2025-12-13 09:38:02.569953450Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b32e3541-6a71-44cc-9cd4-9ba961ba6f06 name=/runtime.v1.RuntimeService/Version
	Dec 13 09:38:02 functional-553391 crio[5772]: time="2025-12-13 09:38:02.570052884Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b32e3541-6a71-44cc-9cd4-9ba961ba6f06 name=/runtime.v1.RuntimeService/Version
	Dec 13 09:38:02 functional-553391 crio[5772]: time="2025-12-13 09:38:02.571617179Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=92d6e793-03de-4b80-b976-743f1739aaa6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:38:02 functional-553391 crio[5772]: time="2025-12-13 09:38:02.572336957Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765618682572311502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:164172,},InodesUsed:&UInt64Value{Value:73,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=92d6e793-03de-4b80-b976-743f1739aaa6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:38:02 functional-553391 crio[5772]: time="2025-12-13 09:38:02.573253255Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92d521d1-9b7a-4373-a8e7-f9ce76cc55eb name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:38:02 functional-553391 crio[5772]: time="2025-12-13 09:38:02.573505434Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92d521d1-9b7a-4373-a8e7-f9ce76cc55eb name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:38:02 functional-553391 crio[5772]: time="2025-12-13 09:38:02.574010182Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a033e03e6998717fd54c7a982e213390e2a93a581ab7db6633f97e03610bcce,PodSandboxId:1c66dfad4cada9aca6045bea029b2985bbb8d379f23e2de4ff19f786a4d9e0a2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765618055477026408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0f8e40cbfe4fe5278f2c2ebdcd0a07ddc2799d2d471fefccfdb255eba910739,PodSandboxId:981ef7045b5a7ba2a152b10d09e8904c2083766c230c9d447a92f2977a4b7b40,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765618055790289701,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6d3aa793a5d4bdfe5b2b3732af0374d4cddf7fb3eedc48b5fe0e21ee321de3,PodSandboxId:45411b104740ddd5117e91200d1a690e438ae7a32255f1cb171b6bbbdfbcd892,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765618055498638556,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2237eb3cfb9427342a03e244cb23da2440800776c88d39def517cae2df0c9081,PodSandboxId:7e0c763b37cf6447b07c9f367a9d448f4b876618c1146aeac6cd0602ada58d91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765618052710769135,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136e051ce4b87ff42f1c7596ef9ad758,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a44d955020176708ebbad74e424fc16d6e74e1600a4aa935cc2ad1fb033d01,PodSandboxId:79e99323f18bbf1152b9765898d58f53eb3c609f3fc414b0680b3cd72d5fb60d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONT
AINER_RUNNING,CreatedAt:1765618052613820224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47ee0aabb1efdd09eec154c30881929f05d58a9520e541025def583ab954d02,PodSandboxId:748de94d0a3965d93b15772c21ed0a143d1dc80ebe6b1cc7b52869edf3fc6961,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765618052599039262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74dbec4b8aef1a21cd0d8a33ea1dcdf6703376e2fac9decc7229c4653835a2e9,PodSandboxId:5e9144b389dce1dd3d416188988aa991a8d750e08b7fb3adbf78f24eca9c973a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765617926050470093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9244bb17b8485f84b8f76fd8b0e1a25b8f2e3ef5cf5a3f95f1be4666b075421,PodSandboxId:e6342f727896e432c490e632c9755f1679b02fab10e6d297e644839c8d4d7cb9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765617926038294787,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e2ebf7101ff7d080d10386c7911c24013836e1849ba2237f3c3ccf2a0697bb,PodSandboxId:235bdf467969af32577d0402d1c26dbaa02b168f83902d82024b9484fa704e63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765617900109965284,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ccf55277802f6d84e33d122e3b923248435c477a7d8c5e6abca019214467ad,PodSandboxId:ad35e4e9691075c9ec6e8d7f1842bc656bd2d3fc305cb40f78320be06d05e563,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765617900149382376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 623733f12fc7a2bd3df192b3433220d0,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\
",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef6c74863ea74a437871b1d75d0adf6fdd2e13050bd9f5fda41a413df2ad695,PodSandboxId:f1efa84ba477462501f24ab4ce10544e1def90c12e260715f5392f8129ccbf5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765617900061465369,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198
bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7de46befbc341adcce80258301d16635ae42bacd96697be843a3ea37de3097c,PodSandboxId:3c085dc8222fb8456c9f36193cce150826e12aca23bda60fede0bb6b4fcead2b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765617899883988957,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=92d521d1-9b7a-4373-a8e7-f9ce76cc55eb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f0f8e40cbfe4f       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139   10 minutes ago      Running             coredns                   2                   981ef7045b5a7       coredns-7d764666f9-rjg8z                    kube-system
	4b6d3aa793a5d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 minutes ago      Running             storage-provisioner       3                   45411b104740d       storage-provisioner                         kube-system
	4a033e03e6998       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   10 minutes ago      Running             kube-proxy                2                   1c66dfad4cada       kube-proxy-nmxbh                            kube-system
	2237eb3cfb942       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   10 minutes ago      Running             kube-apiserver            0                   7e0c763b37cf6       kube-apiserver-functional-553391            kube-system
	17a44d9550201       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   10 minutes ago      Running             etcd                      2                   79e99323f18bb       etcd-functional-553391                      kube-system
	c47ee0aabb1ef       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   10 minutes ago      Running             kube-controller-manager   2                   748de94d0a396       kube-controller-manager-functional-553391   kube-system
	74dbec4b8aef1       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139   12 minutes ago      Exited              coredns                   1                   5e9144b389dce       coredns-7d764666f9-rjg8z                    kube-system
	b9244bb17b848       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 minutes ago      Exited              storage-provisioner       2                   e6342f727896e       storage-provisioner                         kube-system
	15ccf55277802       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   13 minutes ago      Exited              kube-scheduler            1                   ad35e4e969107       kube-scheduler-functional-553391            kube-system
	43e2ebf7101ff       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   13 minutes ago      Exited              etcd                      1                   235bdf467969a       etcd-functional-553391                      kube-system
	bef6c74863ea7       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   13 minutes ago      Exited              kube-controller-manager   1                   f1efa84ba4774       kube-controller-manager-functional-553391   kube-system
	a7de46befbc34       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   13 minutes ago      Exited              kube-proxy                1                   3c085dc8222fb       kube-proxy-nmxbh                            kube-system
	
	
	==> coredns [74dbec4b8aef1a21cd0d8a33ea1dcdf6703376e2fac9decc7229c4653835a2e9] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:38849 - 10587 "HINFO IN 1179697731504859025.6635090832342881038. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.0581936s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f0f8e40cbfe4fe5278f2c2ebdcd0a07ddc2799d2d471fefccfdb255eba910739] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:34740 - 64681 "HINFO IN 9182635211618943717.7247219486871022041. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.096462358s
	
	
	==> describe nodes <==
	Name:               functional-553391
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-553391
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b
	                    minikube.k8s.io/name=functional-553391
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T09_24_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 09:23:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-553391
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 09:37:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 09:32:09 +0000   Sat, 13 Dec 2025 09:23:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 09:32:09 +0000   Sat, 13 Dec 2025 09:23:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 09:32:09 +0000   Sat, 13 Dec 2025 09:23:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 09:32:09 +0000   Sat, 13 Dec 2025 09:24:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.38
	  Hostname:    functional-553391
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 fdf3a8f76472433fb21c3307ef40831b
	  System UUID:                fdf3a8f7-6472-433f-b21c-3307ef40831b
	  Boot ID:                    3c8d40c0-0e2d-4a05-9897-d24bc6cacbb9
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-rjg8z                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 etcd-functional-553391                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         14m
	  kube-system                 kube-apiserver-functional-553391             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-553391    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-nmxbh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-functional-553391             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  13m   node-controller  Node functional-553391 event: Registered Node functional-553391 in Controller
	  Normal  RegisteredNode  12m   node-controller  Node functional-553391 event: Registered Node functional-553391 in Controller
	  Normal  RegisteredNode  10m   node-controller  Node functional-553391 event: Registered Node functional-553391 in Controller
	
	
	==> dmesg <==
	[Dec13 09:23] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000049] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004691] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.166245] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.084827] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.098140] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.130337] kauditd_printk_skb: 171 callbacks suppressed
	[Dec13 09:24] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.600381] kauditd_printk_skb: 248 callbacks suppressed
	[ +35.932646] kauditd_printk_skb: 39 callbacks suppressed
	[Dec13 09:25] kauditd_printk_skb: 356 callbacks suppressed
	[  +1.617310] kauditd_printk_skb: 42 callbacks suppressed
	[ +12.973574] kauditd_printk_skb: 12 callbacks suppressed
	[Dec13 09:27] kauditd_printk_skb: 209 callbacks suppressed
	[  +3.605359] kauditd_printk_skb: 153 callbacks suppressed
	[  +6.110507] kauditd_printk_skb: 133 callbacks suppressed
	[Dec13 09:31] kauditd_printk_skb: 36 callbacks suppressed
	[Dec13 09:35] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [17a44d955020176708ebbad74e424fc16d6e74e1600a4aa935cc2ad1fb033d01] <==
	{"level":"warn","ts":"2025-12-13T09:27:33.755682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.771997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.780019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.785864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.793594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.801980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.809713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.823612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.829416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.840008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.850256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.858039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.864984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.872083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.891523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.905524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.916312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.922300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.931561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.941621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.945560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.993699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50688","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T09:37:33.367003Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":959}
	{"level":"info","ts":"2025-12-13T09:37:33.377296Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":959,"took":"9.242015ms","hash":2537724432,"current-db-size-bytes":2850816,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2850816,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2025-12-13T09:37:33.377568Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2537724432,"revision":959,"compact-revision":-1}
	
	
	==> etcd [43e2ebf7101ff7d080d10386c7911c24013836e1849ba2237f3c3ccf2a0697bb] <==
	{"level":"warn","ts":"2025-12-13T09:25:23.910219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:25:23.923853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:25:23.927780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:25:23.936775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:25:23.944573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:25:23.953555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:25:24.035457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36150","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T09:25:51.349706Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-13T09:25:51.349772Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-553391","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.38:2380"],"advertise-client-urls":["https://192.168.39.38:2379"]}
	{"level":"error","ts":"2025-12-13T09:25:51.357195Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T09:25:51.442258Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T09:25:51.443776Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T09:25:51.443862Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"38b26e584d45e0da","current-leader-member-id":"38b26e584d45e0da"}
	{"level":"warn","ts":"2025-12-13T09:25:51.443944Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T09:25:51.444031Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T09:25:51.444041Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T09:25:51.444058Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-13T09:25:51.444015Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-13T09:25:51.444108Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.38:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T09:25:51.444134Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.38:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T09:25:51.444142Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.38:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T09:25:51.447967Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.38:2380"}
	{"level":"error","ts":"2025-12-13T09:25:51.448071Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.38:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T09:25:51.448106Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.38:2380"}
	{"level":"info","ts":"2025-12-13T09:25:51.448114Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-553391","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.38:2380"],"advertise-client-urls":["https://192.168.39.38:2379"]}
	
	
	==> kernel <==
	 09:38:02 up 14 min,  0 users,  load average: 0.33, 0.23, 0.18
	Linux functional-553391 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 11 23:11:39 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [2237eb3cfb9427342a03e244cb23da2440800776c88d39def517cae2df0c9081] <==
	E1213 09:27:34.748815       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1213 09:27:34.752018       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1213 09:27:34.752063       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1213 09:27:34.752152       1 aggregator.go:187] initial CRD sync complete...
	I1213 09:27:34.752161       1 autoregister_controller.go:144] Starting autoregister controller
	I1213 09:27:34.752166       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 09:27:34.752170       1 cache.go:39] Caches are synced for autoregister controller
	I1213 09:27:34.755443       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 09:27:34.768964       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 09:27:34.989792       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 09:27:35.552170       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1213 09:27:36.683227       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 09:27:36.740625       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 09:27:36.773702       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 09:27:36.784227       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 09:27:38.099192       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 09:27:38.250187       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 09:27:38.348439       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 09:31:44.252988       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.31.242"}
	I1213 09:31:47.762685       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.66.94"}
	I1213 09:31:51.195438       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.98.33.51"}
	I1213 09:35:55.088227       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 09:35:55.327550       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.202.53"}
	I1213 09:35:55.349609       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.109.158"}
	I1213 09:37:34.662190       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [bef6c74863ea74a437871b1d75d0adf6fdd2e13050bd9f5fda41a413df2ad695] <==
	I1213 09:25:27.972668       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.974789       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.972832       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.974966       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.975360       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.972929       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.977747       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.977858       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.977973       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.978099       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.978132       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.978207       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.978922       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.979032       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.979085       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.984323       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.984406       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.984450       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.984501       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.988096       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:28.008019       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:28.066414       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:28.084492       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:28.084512       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1213 09:25:28.084517       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-controller-manager [c47ee0aabb1efdd09eec154c30881929f05d58a9520e541025def583ab954d02] <==
	I1213 09:27:37.929063       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.959660       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.959721       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.959741       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.959800       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.960015       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.960080       1 range_allocator.go:177] "Sending events to api server"
	I1213 09:27:37.960131       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1213 09:27:37.960137       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:27:37.960141       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.959682       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.960314       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1213 09:27:37.960382       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-553391"
	I1213 09:27:37.960424       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1213 09:27:37.968967       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.971598       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.972793       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1213 09:27:37.972804       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1213 09:27:37.986319       1 shared_informer.go:377] "Caches are synced"
	E1213 09:35:55.174531       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:55.184739       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:55.198136       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:55.216383       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:55.221609       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:55.233963       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [4a033e03e6998717fd54c7a982e213390e2a93a581ab7db6633f97e03610bcce] <==
	I1213 09:27:36.194743       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:27:36.296159       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:36.299041       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.38"]
	E1213 09:27:36.299525       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:27:36.369439       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 09:27:36.369524       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 09:27:36.369545       1 server_linux.go:136] "Using iptables Proxier"
	I1213 09:27:36.381798       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:27:36.382110       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1213 09:27:36.382141       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:27:36.388769       1 config.go:200] "Starting service config controller"
	I1213 09:27:36.389028       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:27:36.389070       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:27:36.389076       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:27:36.389235       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:27:36.389535       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:27:36.390638       1 config.go:309] "Starting node config controller"
	I1213 09:27:36.390719       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:27:36.489796       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 09:27:36.489917       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 09:27:36.489987       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 09:27:36.491158       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [a7de46befbc341adcce80258301d16635ae42bacd96697be843a3ea37de3097c] <==
	I1213 09:25:00.988777       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:25:25.893454       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:25.893556       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.38"]
	E1213 09:25:25.893655       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:25:25.940800       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 09:25:25.940970       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 09:25:25.940997       1 server_linux.go:136] "Using iptables Proxier"
	I1213 09:25:25.950368       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:25:25.950781       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1213 09:25:25.950797       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:25:25.953179       1 config.go:200] "Starting service config controller"
	I1213 09:25:25.954715       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:25:25.953595       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:25:25.955069       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:25:25.953610       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:25:25.955255       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:25:25.959118       1 config.go:309] "Starting node config controller"
	I1213 09:25:25.959146       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:25:26.055988       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 09:25:26.056061       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 09:25:26.056143       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 09:25:26.060009       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [15ccf55277802f6d84e33d122e3b923248435c477a7d8c5e6abca019214467ad] <==
	E1213 09:25:24.739405       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1213 09:25:24.739461       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1213 09:25:24.739526       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1213 09:25:24.739572       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1213 09:25:24.739642       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1213 09:25:24.739677       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1213 09:25:24.739770       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1213 09:25:24.739826       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1213 09:25:24.740002       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1213 09:25:24.740105       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1213 09:25:24.740186       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1213 09:25:24.740250       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1213 09:25:24.740316       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1213 09:25:24.740394       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1213 09:25:24.740516       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1213 09:25:24.740571       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1213 09:25:24.740616       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1213 09:25:24.740653       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1213 09:25:27.991267       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:51.355821       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1213 09:25:51.365725       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1213 09:25:51.365754       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:25:51.368478       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1213 09:25:51.368492       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1213 09:25:51.368520       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 13 09:37:32 functional-553391 kubelet[6136]: E1213 09:37:32.069777    6136 manager.go:1119] Failed to create existing container: /kubepods/besteffort/pod5b03d972-1560-487e-8c23-357ba0a288ce/crio-3c085dc8222fb8456c9f36193cce150826e12aca23bda60fede0bb6b4fcead2b: Error finding container 3c085dc8222fb8456c9f36193cce150826e12aca23bda60fede0bb6b4fcead2b: Status 404 returned error can't find the container with id 3c085dc8222fb8456c9f36193cce150826e12aca23bda60fede0bb6b4fcead2b
	Dec 13 09:37:32 functional-553391 kubelet[6136]: E1213 09:37:32.070142    6136 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod623733f12fc7a2bd3df192b3433220d0/crio-ad35e4e9691075c9ec6e8d7f1842bc656bd2d3fc305cb40f78320be06d05e563: Error finding container ad35e4e9691075c9ec6e8d7f1842bc656bd2d3fc305cb40f78320be06d05e563: Status 404 returned error can't find the container with id ad35e4e9691075c9ec6e8d7f1842bc656bd2d3fc305cb40f78320be06d05e563
	Dec 13 09:37:32 functional-553391 kubelet[6136]: E1213 09:37:32.070562    6136 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod9423f6b5da5b329cef63430d36acee6e/crio-235bdf467969af32577d0402d1c26dbaa02b168f83902d82024b9484fa704e63: Error finding container 235bdf467969af32577d0402d1c26dbaa02b168f83902d82024b9484fa704e63: Status 404 returned error can't find the container with id 235bdf467969af32577d0402d1c26dbaa02b168f83902d82024b9484fa704e63
	Dec 13 09:37:32 functional-553391 kubelet[6136]: E1213 09:37:32.071454    6136 manager.go:1119] Failed to create existing container: /kubepods/burstable/poda819a5b5d8a1acac4ff9198bf329d816/crio-f1efa84ba477462501f24ab4ce10544e1def90c12e260715f5392f8129ccbf5f: Error finding container f1efa84ba477462501f24ab4ce10544e1def90c12e260715f5392f8129ccbf5f: Status 404 returned error can't find the container with id f1efa84ba477462501f24ab4ce10544e1def90c12e260715f5392f8129ccbf5f
	Dec 13 09:37:32 functional-553391 kubelet[6136]: E1213 09:37:32.222801    6136 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765618652222344076  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164172}  inodes_used:{value:73}}"
	Dec 13 09:37:32 functional-553391 kubelet[6136]: E1213 09:37:32.222827    6136 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765618652222344076  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164172}  inodes_used:{value:73}}"
	Dec 13 09:37:39 functional-553391 kubelet[6136]: E1213 09:37:39.981825    6136 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-553391" containerName="kube-controller-manager"
	Dec 13 09:37:42 functional-553391 kubelet[6136]: E1213 09:37:42.224979    6136 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765618662224416061  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164172}  inodes_used:{value:73}}"
	Dec 13 09:37:42 functional-553391 kubelet[6136]: E1213 09:37:42.224999    6136 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765618662224416061  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164172}  inodes_used:{value:73}}"
	Dec 13 09:37:43 functional-553391 kubelet[6136]: E1213 09:37:43.982466    6136 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-553391" containerName="kube-scheduler"
	Dec 13 09:37:44 functional-553391 kubelet[6136]: E1213 09:37:44.000613    6136 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\" already exists"
	Dec 13 09:37:44 functional-553391 kubelet[6136]: E1213 09:37:44.000660    6136 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\" already exists" pod="kube-system/kube-scheduler-functional-553391"
	Dec 13 09:37:44 functional-553391 kubelet[6136]: E1213 09:37:44.000674    6136 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\" already exists" pod="kube-system/kube-scheduler-functional-553391"
	Dec 13 09:37:44 functional-553391 kubelet[6136]: E1213 09:37:44.000716    6136 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-553391_kube-system(623733f12fc7a2bd3df192b3433220d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-553391_kube-system(623733f12fc7a2bd3df192b3433220d0)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-553391" podUID="623733f12fc7a2bd3df192b3433220d0"
	Dec 13 09:37:48 functional-553391 kubelet[6136]: E1213 09:37:48.981659    6136 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-553391" containerName="etcd"
	Dec 13 09:37:52 functional-553391 kubelet[6136]: E1213 09:37:52.229213    6136 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765618672227820928  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164172}  inodes_used:{value:73}}"
	Dec 13 09:37:52 functional-553391 kubelet[6136]: E1213 09:37:52.229279    6136 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765618672227820928  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164172}  inodes_used:{value:73}}"
	Dec 13 09:37:54 functional-553391 kubelet[6136]: E1213 09:37:54.981747    6136 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-553391" containerName="kube-scheduler"
	Dec 13 09:37:54 functional-553391 kubelet[6136]: E1213 09:37:54.997061    6136 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\" already exists"
	Dec 13 09:37:54 functional-553391 kubelet[6136]: E1213 09:37:54.997212    6136 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\" already exists" pod="kube-system/kube-scheduler-functional-553391"
	Dec 13 09:37:54 functional-553391 kubelet[6136]: E1213 09:37:54.997398    6136 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\" already exists" pod="kube-system/kube-scheduler-functional-553391"
	Dec 13 09:37:54 functional-553391 kubelet[6136]: E1213 09:37:54.997606    6136 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-553391_kube-system(623733f12fc7a2bd3df192b3433220d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-553391_kube-system(623733f12fc7a2bd3df192b3433220d0)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-553391" podUID="623733f12fc7a2bd3df192b3433220d0"
	Dec 13 09:37:58 functional-553391 kubelet[6136]: E1213 09:37:58.981606    6136 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-553391" containerName="kube-apiserver"
	Dec 13 09:38:02 functional-553391 kubelet[6136]: E1213 09:38:02.233317    6136 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765618682232301532  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164172}  inodes_used:{value:73}}"
	Dec 13 09:38:02 functional-553391 kubelet[6136]: E1213 09:38:02.233726    6136 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765618682232301532  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164172}  inodes_used:{value:73}}"
	
	
	==> storage-provisioner [4b6d3aa793a5d4bdfe5b2b3732af0374d4cddf7fb3eedc48b5fe0e21ee321de3] <==
	W1213 09:37:38.579946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:37:40.584016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:37:40.589029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:37:42.593029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:37:42.601726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:37:44.604993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:37:44.611553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:37:46.614931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:37:46.621301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:37:48.625174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:37:48.634283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:37:50.638355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:37:50.644441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:37:52.648638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:37:52.654766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:37:54.657944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:37:54.666027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:37:56.671196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:37:56.676517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:37:58.679322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:37:58.688339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:38:00.692621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:38:00.697368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:38:02.702077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:38:02.710764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b9244bb17b8485f84b8f76fd8b0e1a25b8f2e3ef5cf5a3f95f1be4666b075421] <==
	I1213 09:25:26.201262       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 09:25:26.213115       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 09:25:26.215708       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 09:25:26.220587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:29.676396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:33.936290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:37.534481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:40.587597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:43.611039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:43.616691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 09:25:43.616926       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 09:25:43.617034       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2a807b17-b228-43c8-97ae-e7e16ec2cdf4", APIVersion:"v1", ResourceVersion:"532", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-553391_d215204c-1541-413a-b16f-a41e2460e6c8 became leader
	I1213 09:25:43.617264       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-553391_d215204c-1541-413a-b16f-a41e2460e6c8!
	W1213 09:25:43.620228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:43.629789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 09:25:43.717650       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-553391_d215204c-1541-413a-b16f-a41e2460e6c8!
	W1213 09:25:45.635212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:45.648126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:47.651491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:47.657022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:49.662658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:49.675580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-553391 -n functional-553391
helpers_test.go:270: (dbg) Run:  kubectl --context functional-553391 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-5758569b79-mc2sr mysql-7d7b65bc95-bmf88 sp-pod dashboard-metrics-scraper-5565989548-kmw97 kubernetes-dashboard-b84665fb8-fphhq
helpers_test.go:283: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-553391 describe pod busybox-mount hello-node-5758569b79-mc2sr mysql-7d7b65bc95-bmf88 sp-pod dashboard-metrics-scraper-5565989548-kmw97 kubernetes-dashboard-b84665fb8-fphhq
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-553391 describe pod busybox-mount hello-node-5758569b79-mc2sr mysql-7d7b65bc95-bmf88 sp-pod dashboard-metrics-scraper-5565989548-kmw97 kubernetes-dashboard-b84665fb8-fphhq: exit status 1 (99.906719ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  mount-munger:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    Environment:  <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-42dlg (ro)
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-42dlg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             hello-node-5758569b79-mc2sr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Image:        kicbase/echo-server
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8f569 (ro)
	Volumes:
	  kube-api-access-8f569:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             mysql-7d7b65bc95-bmf88
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=mysql
	                  pod-template-hash=7d7b65bc95
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-7d7b65bc95
	Containers:
	  mysql:
	    Image:      public.ecr.aws/docker/library/mysql:8.4
	    Port:       3306/TCP (mysql)
	    Host Port:  0/TCP (mysql)
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q89gb (ro)
	Volumes:
	  kube-api-access-q89gb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Image:        public.ecr.aws/nginx/nginx:alpine
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p9sfg (ro)
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-p9sfg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-kmw97" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-fphhq" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context functional-553391 describe pod busybox-mount hello-node-5758569b79-mc2sr mysql-7d7b65bc95-bmf88 sp-pod dashboard-metrics-scraper-5565989548-kmw97 kubernetes-dashboard-b84665fb8-fphhq: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (368.66s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (602.83s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-553391 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-bmf88" [74075963-1216-4f99-a121-261c53b57a2c] Pending
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1804: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-553391 -n functional-553391
functional_test.go:1804: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: showing logs for failed pods as of 2025-12-13 09:41:51.488905079 +0000 UTC m=+1822.216865691
functional_test.go:1804: (dbg) Run:  kubectl --context functional-553391 describe po mysql-7d7b65bc95-bmf88 -n default
functional_test.go:1804: (dbg) kubectl --context functional-553391 describe po mysql-7d7b65bc95-bmf88 -n default:
Name:             mysql-7d7b65bc95-bmf88
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=mysql
pod-template-hash=7d7b65bc95
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/mysql-7d7b65bc95
Containers:
mysql:
Image:      public.ecr.aws/docker/library/mysql:8.4
Port:       3306/TCP (mysql)
Host Port:  0/TCP (mysql)
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q89gb (ro)
Volumes:
kube-api-access-q89gb:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
functional_test.go:1804: (dbg) Run:  kubectl --context functional-553391 logs mysql-7d7b65bc95-bmf88 -n default
functional_test.go:1804: (dbg) kubectl --context functional-553391 logs mysql-7d7b65bc95-bmf88 -n default:
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-553391 -n functional-553391
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-553391 logs -n 25: (1.55222727s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                    ARGS                                                                     │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-553391 ssh findmnt -T /mount3                                                                                                    │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │ 13 Dec 25 09:35 UTC │
	│ mount          │ -p functional-553391 --kill=true                                                                                                            │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ ssh            │ functional-553391 ssh echo hello                                                                                                            │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │ 13 Dec 25 09:35 UTC │
	│ ssh            │ functional-553391 ssh cat /etc/hostname                                                                                                     │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │ 13 Dec 25 09:35 UTC │
	│ start          │ -p functional-553391 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ start          │ -p functional-553391 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0           │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ start          │ -p functional-553391 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-553391 --alsologtostderr -v=1                                                                              │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:35 UTC │                     │
	│ addons         │ functional-553391 addons list                                                                                                               │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:38 UTC │ 13 Dec 25 09:38 UTC │
	│ addons         │ functional-553391 addons list -o json                                                                                                       │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:38 UTC │ 13 Dec 25 09:38 UTC │
	│ update-context │ functional-553391 update-context --alsologtostderr -v=2                                                                                     │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:40 UTC │ 13 Dec 25 09:40 UTC │
	│ update-context │ functional-553391 update-context --alsologtostderr -v=2                                                                                     │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:40 UTC │ 13 Dec 25 09:40 UTC │
	│ update-context │ functional-553391 update-context --alsologtostderr -v=2                                                                                     │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:40 UTC │ 13 Dec 25 09:40 UTC │
	│ image          │ functional-553391 image ls --format short --alsologtostderr                                                                                 │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:40 UTC │ 13 Dec 25 09:40 UTC │
	│ image          │ functional-553391 image ls --format yaml --alsologtostderr                                                                                  │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:40 UTC │ 13 Dec 25 09:40 UTC │
	│ ssh            │ functional-553391 ssh pgrep buildkitd                                                                                                       │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:40 UTC │                     │
	│ image          │ functional-553391 image build -t localhost/my-image:functional-553391 testdata/build --alsologtostderr                                      │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:40 UTC │ 13 Dec 25 09:41 UTC │
	│ image          │ functional-553391 image ls                                                                                                                  │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:41 UTC │ 13 Dec 25 09:41 UTC │
	│ image          │ functional-553391 image ls --format json --alsologtostderr                                                                                  │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:41 UTC │ 13 Dec 25 09:41 UTC │
	│ image          │ functional-553391 image ls --format table --alsologtostderr                                                                                 │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:41 UTC │ 13 Dec 25 09:41 UTC │
	│ service        │ functional-553391 service list                                                                                                              │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:41 UTC │ 13 Dec 25 09:41 UTC │
	│ service        │ functional-553391 service list -o json                                                                                                      │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:41 UTC │ 13 Dec 25 09:41 UTC │
	│ service        │ functional-553391 service --namespace=default --https --url hello-node                                                                      │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:41 UTC │                     │
	│ service        │ functional-553391 service hello-node --url --format={{.IP}}                                                                                 │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:41 UTC │                     │
	│ service        │ functional-553391 service hello-node --url                                                                                                  │ functional-553391 │ jenkins │ v1.37.0 │ 13 Dec 25 09:41 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:35:54
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:35:54.159923  403295 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:35:54.160027  403295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:35:54.160032  403295 out.go:374] Setting ErrFile to fd 2...
	I1213 09:35:54.160036  403295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:35:54.160364  403295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
	I1213 09:35:54.160842  403295 out.go:368] Setting JSON to false
	I1213 09:35:54.161750  403295 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4703,"bootTime":1765613851,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:35:54.161818  403295 start.go:143] virtualization: kvm guest
	I1213 09:35:54.163745  403295 out.go:179] * [functional-553391] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:35:54.165254  403295 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 09:35:54.165269  403295 notify.go:221] Checking for updates...
	I1213 09:35:54.167675  403295 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:35:54.168945  403295 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
	I1213 09:35:54.170341  403295 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
	I1213 09:35:54.171825  403295 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:35:54.173115  403295 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:35:54.174891  403295 config.go:182] Loaded profile config "functional-553391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:35:54.175647  403295 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:35:54.206662  403295 out.go:179] * Using the kvm2 driver based on existing profile
	I1213 09:35:54.207891  403295 start.go:309] selected driver: kvm2
	I1213 09:35:54.207911  403295 start.go:927] validating driver "kvm2" against &{Name:functional-553391 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-553391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:35:54.208021  403295 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:35:54.210175  403295 out.go:203] 
	W1213 09:35:54.211537  403295 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 09:35:54.212968  403295 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 09:41:52 functional-553391 crio[5772]: time="2025-12-13 09:41:52.409753509Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765618912409727485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:189831,},InodesUsed:&UInt64Value{Value:89,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=451c3454-61c9-4f6f-8a3b-a30f7bae1356 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:41:52 functional-553391 crio[5772]: time="2025-12-13 09:41:52.411145943Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d3c09d48-3301-480a-8631-00b0e0dbf462 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:41:52 functional-553391 crio[5772]: time="2025-12-13 09:41:52.411255213Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d3c09d48-3301-480a-8631-00b0e0dbf462 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:41:52 functional-553391 crio[5772]: time="2025-12-13 09:41:52.411497578Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a033e03e6998717fd54c7a982e213390e2a93a581ab7db6633f97e03610bcce,PodSandboxId:1c66dfad4cada9aca6045bea029b2985bbb8d379f23e2de4ff19f786a4d9e0a2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765618055477026408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0f8e40cbfe4fe5278f2c2ebdcd0a07ddc2799d2d471fefccfdb255eba910739,PodSandboxId:981ef7045b5a7ba2a152b10d09e8904c2083766c230c9d447a92f2977a4b7b40,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765618055790289701,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6d3aa793a5d4bdfe5b2b3732af0374d4cddf7fb3eedc48b5fe0e21ee321de3,PodSandboxId:45411b104740ddd5117e91200d1a690e438ae7a32255f1cb171b6bbbdfbcd892,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765618055498638556,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2237eb3cfb9427342a03e244cb23da2440800776c88d39def517cae2df0c9081,PodSandboxId:7e0c763b37cf6447b07c9f367a9d448f4b876618c1146aeac6cd0602ada58d91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765618052710769135,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136e051ce4b87ff42f1c7596ef9ad758,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a44d955020176708ebbad74e424fc16d6e74e1600a4aa935cc2ad1fb033d01,PodSandboxId:79e99323f18bbf1152b9765898d58f53eb3c609f3fc414b0680b3cd72d5fb60d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONT
AINER_RUNNING,CreatedAt:1765618052613820224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47ee0aabb1efdd09eec154c30881929f05d58a9520e541025def583ab954d02,PodSandboxId:748de94d0a3965d93b15772c21ed0a143d1dc80ebe6b1cc7b52869edf3fc6961,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765618052599039262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74dbec4b8aef1a21cd0d8a33ea1dcdf6703376e2fac9decc7229c4653835a2e9,PodSandboxId:5e9144b389dce1dd3d416188988aa991a8d750e08b7fb3adbf78f24eca9c973a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765617926050470093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9244bb17b8485f84b8f76fd8b0e1a25b8f2e3ef5cf5a3f95f1be4666b075421,PodSandboxId:e6342f727896e432c490e632c9755f1679b02fab10e6d297e644839c8d4d7cb9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765617926038294787,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e2ebf7101ff7d080d10386c7911c24013836e1849ba2237f3c3ccf2a0697bb,PodSandboxId:235bdf467969af32577d0402d1c26dbaa02b168f83902d82024b9484fa704e63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765617900109965284,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ccf55277802f6d84e33d122e3b923248435c477a7d8c5e6abca019214467ad,PodSandboxId:ad35e4e9691075c9ec6e8d7f1842bc656bd2d3fc305cb40f78320be06d05e563,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765617900149382376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 623733f12fc7a2bd3df192b3433220d0,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\
",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef6c74863ea74a437871b1d75d0adf6fdd2e13050bd9f5fda41a413df2ad695,PodSandboxId:f1efa84ba477462501f24ab4ce10544e1def90c12e260715f5392f8129ccbf5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765617900061465369,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198
bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7de46befbc341adcce80258301d16635ae42bacd96697be843a3ea37de3097c,PodSandboxId:3c085dc8222fb8456c9f36193cce150826e12aca23bda60fede0bb6b4fcead2b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765617899883988957,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d3c09d48-3301-480a-8631-00b0e0dbf462 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:41:52 functional-553391 crio[5772]: time="2025-12-13 09:41:52.443627375Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1fb1a1fc-9c69-4f93-a028-6de16c45512e name=/runtime.v1.RuntimeService/Version
	Dec 13 09:41:52 functional-553391 crio[5772]: time="2025-12-13 09:41:52.443701236Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1fb1a1fc-9c69-4f93-a028-6de16c45512e name=/runtime.v1.RuntimeService/Version
	Dec 13 09:41:52 functional-553391 crio[5772]: time="2025-12-13 09:41:52.445357487Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6b019e89-4f68-43e2-8518-62242f32beed name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:41:52 functional-553391 crio[5772]: time="2025-12-13 09:41:52.446163028Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765618912446135011,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:189831,},InodesUsed:&UInt64Value{Value:89,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b019e89-4f68-43e2-8518-62242f32beed name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:41:52 functional-553391 crio[5772]: time="2025-12-13 09:41:52.447510431Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=72a51fdf-f7b5-45d7-813b-da35bd89d02e name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:41:52 functional-553391 crio[5772]: time="2025-12-13 09:41:52.447602590Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=72a51fdf-f7b5-45d7-813b-da35bd89d02e name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:41:52 functional-553391 crio[5772]: time="2025-12-13 09:41:52.447828797Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a033e03e6998717fd54c7a982e213390e2a93a581ab7db6633f97e03610bcce,PodSandboxId:1c66dfad4cada9aca6045bea029b2985bbb8d379f23e2de4ff19f786a4d9e0a2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765618055477026408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0f8e40cbfe4fe5278f2c2ebdcd0a07ddc2799d2d471fefccfdb255eba910739,PodSandboxId:981ef7045b5a7ba2a152b10d09e8904c2083766c230c9d447a92f2977a4b7b40,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765618055790289701,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6d3aa793a5d4bdfe5b2b3732af0374d4cddf7fb3eedc48b5fe0e21ee321de3,PodSandboxId:45411b104740ddd5117e91200d1a690e438ae7a32255f1cb171b6bbbdfbcd892,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765618055498638556,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2237eb3cfb9427342a03e244cb23da2440800776c88d39def517cae2df0c9081,PodSandboxId:7e0c763b37cf6447b07c9f367a9d448f4b876618c1146aeac6cd0602ada58d91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765618052710769135,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136e051ce4b87ff42f1c7596ef9ad758,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a44d955020176708ebbad74e424fc16d6e74e1600a4aa935cc2ad1fb033d01,PodSandboxId:79e99323f18bbf1152b9765898d58f53eb3c609f3fc414b0680b3cd72d5fb60d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONT
AINER_RUNNING,CreatedAt:1765618052613820224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47ee0aabb1efdd09eec154c30881929f05d58a9520e541025def583ab954d02,PodSandboxId:748de94d0a3965d93b15772c21ed0a143d1dc80ebe6b1cc7b52869edf3fc6961,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765618052599039262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74dbec4b8aef1a21cd0d8a33ea1dcdf6703376e2fac9decc7229c4653835a2e9,PodSandboxId:5e9144b389dce1dd3d416188988aa991a8d750e08b7fb3adbf78f24eca9c973a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765617926050470093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9244bb17b8485f84b8f76fd8b0e1a25b8f2e3ef5cf5a3f95f1be4666b075421,PodSandboxId:e6342f727896e432c490e632c9755f1679b02fab10e6d297e644839c8d4d7cb9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765617926038294787,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e2ebf7101ff7d080d10386c7911c24013836e1849ba2237f3c3ccf2a0697bb,PodSandboxId:235bdf467969af32577d0402d1c26dbaa02b168f83902d82024b9484fa704e63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765617900109965284,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ccf55277802f6d84e33d122e3b923248435c477a7d8c5e6abca019214467ad,PodSandboxId:ad35e4e9691075c9ec6e8d7f1842bc656bd2d3fc305cb40f78320be06d05e563,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765617900149382376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 623733f12fc7a2bd3df192b3433220d0,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\
",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef6c74863ea74a437871b1d75d0adf6fdd2e13050bd9f5fda41a413df2ad695,PodSandboxId:f1efa84ba477462501f24ab4ce10544e1def90c12e260715f5392f8129ccbf5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765617900061465369,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198
bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7de46befbc341adcce80258301d16635ae42bacd96697be843a3ea37de3097c,PodSandboxId:3c085dc8222fb8456c9f36193cce150826e12aca23bda60fede0bb6b4fcead2b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765617899883988957,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=72a51fdf-f7b5-45d7-813b-da35bd89d02e name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:41:52 functional-553391 crio[5772]: time="2025-12-13 09:41:52.483856247Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f2546b50-7c28-4022-87c3-5b6ddac97a94 name=/runtime.v1.RuntimeService/Version
	Dec 13 09:41:52 functional-553391 crio[5772]: time="2025-12-13 09:41:52.483984961Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f2546b50-7c28-4022-87c3-5b6ddac97a94 name=/runtime.v1.RuntimeService/Version
	Dec 13 09:41:52 functional-553391 crio[5772]: time="2025-12-13 09:41:52.485772200Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=493e5081-ed5d-4c00-a12e-ac59ff5627c4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:41:52 functional-553391 crio[5772]: time="2025-12-13 09:41:52.486523037Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1fd20515-80f1-4b80-822e-7c137013e2ad name=/runtime.v1.RuntimeService/Version
	Dec 13 09:41:52 functional-553391 crio[5772]: time="2025-12-13 09:41:52.486605515Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1fd20515-80f1-4b80-822e-7c137013e2ad name=/runtime.v1.RuntimeService/Version
	Dec 13 09:41:52 functional-553391 crio[5772]: time="2025-12-13 09:41:52.486547852Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765618912486522808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:189831,},InodesUsed:&UInt64Value{Value:89,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=493e5081-ed5d-4c00-a12e-ac59ff5627c4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:41:52 functional-553391 crio[5772]: time="2025-12-13 09:41:52.487685788Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2efb486-25db-4fd1-8291-571e4823325e name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:41:52 functional-553391 crio[5772]: time="2025-12-13 09:41:52.487750285Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2efb486-25db-4fd1-8291-571e4823325e name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:41:52 functional-553391 crio[5772]: time="2025-12-13 09:41:52.487720703Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fd453747-1a0f-44d1-9403-a60f4e314433 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:41:52 functional-553391 crio[5772]: time="2025-12-13 09:41:52.488035007Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a033e03e6998717fd54c7a982e213390e2a93a581ab7db6633f97e03610bcce,PodSandboxId:1c66dfad4cada9aca6045bea029b2985bbb8d379f23e2de4ff19f786a4d9e0a2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765618055477026408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0f8e40cbfe4fe5278f2c2ebdcd0a07ddc2799d2d471fefccfdb255eba910739,PodSandboxId:981ef7045b5a7ba2a152b10d09e8904c2083766c230c9d447a92f2977a4b7b40,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765618055790289701,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6d3aa793a5d4bdfe5b2b3732af0374d4cddf7fb3eedc48b5fe0e21ee321de3,PodSandboxId:45411b104740ddd5117e91200d1a690e438ae7a32255f1cb171b6bbbdfbcd892,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765618055498638556,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2237eb3cfb9427342a03e244cb23da2440800776c88d39def517cae2df0c9081,PodSandboxId:7e0c763b37cf6447b07c9f367a9d448f4b876618c1146aeac6cd0602ada58d91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765618052710769135,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136e051ce4b87ff42f1c7596ef9ad758,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a44d955020176708ebbad74e424fc16d6e74e1600a4aa935cc2ad1fb033d01,PodSandboxId:79e99323f18bbf1152b9765898d58f53eb3c609f3fc414b0680b3cd72d5fb60d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONT
AINER_RUNNING,CreatedAt:1765618052613820224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47ee0aabb1efdd09eec154c30881929f05d58a9520e541025def583ab954d02,PodSandboxId:748de94d0a3965d93b15772c21ed0a143d1dc80ebe6b1cc7b52869edf3fc6961,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765618052599039262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74dbec4b8aef1a21cd0d8a33ea1dcdf6703376e2fac9decc7229c4653835a2e9,PodSandboxId:5e9144b389dce1dd3d416188988aa991a8d750e08b7fb3adbf78f24eca9c973a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765617926050470093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9244bb17b8485f84b8f76fd8b0e1a25b8f2e3ef5cf5a3f95f1be4666b075421,PodSandboxId:e6342f727896e432c490e632c9755f1679b02fab10e6d297e644839c8d4d7cb9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765617926038294787,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e2ebf7101ff7d080d10386c7911c24013836e1849ba2237f3c3ccf2a0697bb,PodSandboxId:235bdf467969af32577d0402d1c26dbaa02b168f83902d82024b9484fa704e63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765617900109965284,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ccf55277802f6d84e33d122e3b923248435c477a7d8c5e6abca019214467ad,PodSandboxId:ad35e4e9691075c9ec6e8d7f1842bc656bd2d3fc305cb40f78320be06d05e563,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765617900149382376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 623733f12fc7a2bd3df192b3433220d0,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\
",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef6c74863ea74a437871b1d75d0adf6fdd2e13050bd9f5fda41a413df2ad695,PodSandboxId:f1efa84ba477462501f24ab4ce10544e1def90c12e260715f5392f8129ccbf5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765617900061465369,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198
bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7de46befbc341adcce80258301d16635ae42bacd96697be843a3ea37de3097c,PodSandboxId:3c085dc8222fb8456c9f36193cce150826e12aca23bda60fede0bb6b4fcead2b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765617899883988957,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2efb486-25db-4fd1-8291-571e4823325e name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:41:52 functional-553391 crio[5772]: time="2025-12-13 09:41:52.489339575Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765618912489321193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:189831,},InodesUsed:&UInt64Value{Value:89,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd453747-1a0f-44d1-9403-a60f4e314433 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 09:41:52 functional-553391 crio[5772]: time="2025-12-13 09:41:52.490268316Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b93c8daa-1fdb-4f87-88b4-d470dba6a4cb name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:41:52 functional-553391 crio[5772]: time="2025-12-13 09:41:52.490368942Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b93c8daa-1fdb-4f87-88b4-d470dba6a4cb name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 09:41:52 functional-553391 crio[5772]: time="2025-12-13 09:41:52.490622397Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a033e03e6998717fd54c7a982e213390e2a93a581ab7db6633f97e03610bcce,PodSandboxId:1c66dfad4cada9aca6045bea029b2985bbb8d379f23e2de4ff19f786a4d9e0a2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765618055477026408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0f8e40cbfe4fe5278f2c2ebdcd0a07ddc2799d2d471fefccfdb255eba910739,PodSandboxId:981ef7045b5a7ba2a152b10d09e8904c2083766c230c9d447a92f2977a4b7b40,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765618055790289701,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6d3aa793a5d4bdfe5b2b3732af0374d4cddf7fb3eedc48b5fe0e21ee321de3,PodSandboxId:45411b104740ddd5117e91200d1a690e438ae7a32255f1cb171b6bbbdfbcd892,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765618055498638556,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2237eb3cfb9427342a03e244cb23da2440800776c88d39def517cae2df0c9081,PodSandboxId:7e0c763b37cf6447b07c9f367a9d448f4b876618c1146aeac6cd0602ada58d91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765618052710769135,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136e051ce4b87ff42f1c7596ef9ad758,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a44d955020176708ebbad74e424fc16d6e74e1600a4aa935cc2ad1fb033d01,PodSandboxId:79e99323f18bbf1152b9765898d58f53eb3c609f3fc414b0680b3cd72d5fb60d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONT
AINER_RUNNING,CreatedAt:1765618052613820224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47ee0aabb1efdd09eec154c30881929f05d58a9520e541025def583ab954d02,PodSandboxId:748de94d0a3965d93b15772c21ed0a143d1dc80ebe6b1cc7b52869edf3fc6961,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765618052599039262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74dbec4b8aef1a21cd0d8a33ea1dcdf6703376e2fac9decc7229c4653835a2e9,PodSandboxId:5e9144b389dce1dd3d416188988aa991a8d750e08b7fb3adbf78f24eca9c973a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765617926050470093,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-rjg8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474b0e4e-417c-49da-b863-8950ea9eb75f,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9244bb17b8485f84b8f76fd8b0e1a25b8f2e3ef5cf5a3f95f1be4666b075421,PodSandboxId:e6342f727896e432c490e632c9755f1679b02fab10e6d297e644839c8d4d7cb9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765617926038294787,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b1284e2-956a-4e4a-b504-57f20fa9a365,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e2ebf7101ff7d080d10386c7911c24013836e1849ba2237f3c3ccf2a0697bb,PodSandboxId:235bdf467969af32577d0402d1c26dbaa02b168f83902d82024b9484fa704e63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765617900109965284,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9423f6b5da5b329cef63430d36acee6e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ccf55277802f6d84e33d122e3b923248435c477a7d8c5e6abca019214467ad,PodSandboxId:ad35e4e9691075c9ec6e8d7f1842bc656bd2d3fc305cb40f78320be06d05e563,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765617900149382376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 623733f12fc7a2bd3df192b3433220d0,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\
",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef6c74863ea74a437871b1d75d0adf6fdd2e13050bd9f5fda41a413df2ad695,PodSandboxId:f1efa84ba477462501f24ab4ce10544e1def90c12e260715f5392f8129ccbf5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765617900061465369,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-553391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a819a5b5d8a1acac4ff9198
bf329d816,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7de46befbc341adcce80258301d16635ae42bacd96697be843a3ea37de3097c,PodSandboxId:3c085dc8222fb8456c9f36193cce150826e12aca23bda60fede0bb6b4fcead2b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765617899883988957,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmxbh,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b03d972-1560-487e-8c23-357ba0a288ce,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b93c8daa-1fdb-4f87-88b4-d470dba6a4cb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f0f8e40cbfe4f       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139   14 minutes ago      Running             coredns                   2                   981ef7045b5a7       coredns-7d764666f9-rjg8z                    kube-system
	4b6d3aa793a5d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       3                   45411b104740d       storage-provisioner                         kube-system
	4a033e03e6998       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   14 minutes ago      Running             kube-proxy                2                   1c66dfad4cada       kube-proxy-nmxbh                            kube-system
	2237eb3cfb942       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   14 minutes ago      Running             kube-apiserver            0                   7e0c763b37cf6       kube-apiserver-functional-553391            kube-system
	17a44d9550201       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   14 minutes ago      Running             etcd                      2                   79e99323f18bb       etcd-functional-553391                      kube-system
	c47ee0aabb1ef       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   14 minutes ago      Running             kube-controller-manager   2                   748de94d0a396       kube-controller-manager-functional-553391   kube-system
	74dbec4b8aef1       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139   16 minutes ago      Exited              coredns                   1                   5e9144b389dce       coredns-7d764666f9-rjg8z                    kube-system
	b9244bb17b848       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Exited              storage-provisioner       2                   e6342f727896e       storage-provisioner                         kube-system
	15ccf55277802       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   16 minutes ago      Exited              kube-scheduler            1                   ad35e4e969107       kube-scheduler-functional-553391            kube-system
	43e2ebf7101ff       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   16 minutes ago      Exited              etcd                      1                   235bdf467969a       etcd-functional-553391                      kube-system
	bef6c74863ea7       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   16 minutes ago      Exited              kube-controller-manager   1                   f1efa84ba4774       kube-controller-manager-functional-553391   kube-system
	a7de46befbc34       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   16 minutes ago      Exited              kube-proxy                1                   3c085dc8222fb       kube-proxy-nmxbh                            kube-system
	
	
	==> coredns [74dbec4b8aef1a21cd0d8a33ea1dcdf6703376e2fac9decc7229c4653835a2e9] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:38849 - 10587 "HINFO IN 1179697731504859025.6635090832342881038. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.0581936s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f0f8e40cbfe4fe5278f2c2ebdcd0a07ddc2799d2d471fefccfdb255eba910739] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:34740 - 64681 "HINFO IN 9182635211618943717.7247219486871022041. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.096462358s
	
	
	==> describe nodes <==
	Name:               functional-553391
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-553391
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b
	                    minikube.k8s.io/name=functional-553391
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T09_24_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 09:23:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-553391
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 09:41:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 09:41:11 +0000   Sat, 13 Dec 2025 09:23:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 09:41:11 +0000   Sat, 13 Dec 2025 09:23:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 09:41:11 +0000   Sat, 13 Dec 2025 09:23:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 09:41:11 +0000   Sat, 13 Dec 2025 09:24:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.38
	  Hostname:    functional-553391
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 fdf3a8f76472433fb21c3307ef40831b
	  System UUID:                fdf3a8f7-6472-433f-b21c-3307ef40831b
	  Boot ID:                    3c8d40c0-0e2d-4a05-9897-d24bc6cacbb9
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-rjg8z                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     17m
	  kube-system                 etcd-functional-553391                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         17m
	  kube-system                 kube-apiserver-functional-553391             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-functional-553391    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-nmxbh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-functional-553391             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  17m   node-controller  Node functional-553391 event: Registered Node functional-553391 in Controller
	  Normal  RegisteredNode  16m   node-controller  Node functional-553391 event: Registered Node functional-553391 in Controller
	  Normal  RegisteredNode  14m   node-controller  Node functional-553391 event: Registered Node functional-553391 in Controller
	
	
	==> dmesg <==
	[Dec13 09:23] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000049] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004691] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.166245] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.084827] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.098140] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.130337] kauditd_printk_skb: 171 callbacks suppressed
	[Dec13 09:24] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.600381] kauditd_printk_skb: 248 callbacks suppressed
	[ +35.932646] kauditd_printk_skb: 39 callbacks suppressed
	[Dec13 09:25] kauditd_printk_skb: 356 callbacks suppressed
	[  +1.617310] kauditd_printk_skb: 42 callbacks suppressed
	[ +12.973574] kauditd_printk_skb: 12 callbacks suppressed
	[Dec13 09:27] kauditd_printk_skb: 209 callbacks suppressed
	[  +3.605359] kauditd_printk_skb: 153 callbacks suppressed
	[  +6.110507] kauditd_printk_skb: 133 callbacks suppressed
	[Dec13 09:31] kauditd_printk_skb: 36 callbacks suppressed
	[Dec13 09:35] kauditd_printk_skb: 2 callbacks suppressed
	[Dec13 09:38] kauditd_printk_skb: 2 callbacks suppressed
	[Dec13 09:40] crun[9283]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[Dec13 09:41] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [17a44d955020176708ebbad74e424fc16d6e74e1600a4aa935cc2ad1fb033d01] <==
	{"level":"warn","ts":"2025-12-13T09:27:33.755682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.771997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.780019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.785864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.793594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.801980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.809713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.823612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.829416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.840008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.850256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.858039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.864984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.872083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.891523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.905524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.916312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.922300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.931561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.941621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.945560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:27:33.993699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50688","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T09:37:33.367003Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":959}
	{"level":"info","ts":"2025-12-13T09:37:33.377296Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":959,"took":"9.242015ms","hash":2537724432,"current-db-size-bytes":2850816,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2850816,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2025-12-13T09:37:33.377568Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2537724432,"revision":959,"compact-revision":-1}
	
	
	==> etcd [43e2ebf7101ff7d080d10386c7911c24013836e1849ba2237f3c3ccf2a0697bb] <==
	{"level":"warn","ts":"2025-12-13T09:25:23.910219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:25:23.923853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:25:23.927780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:25:23.936775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:25:23.944573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:25:23.953555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:25:24.035457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36150","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T09:25:51.349706Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-13T09:25:51.349772Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-553391","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.38:2380"],"advertise-client-urls":["https://192.168.39.38:2379"]}
	{"level":"error","ts":"2025-12-13T09:25:51.357195Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T09:25:51.442258Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T09:25:51.443776Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T09:25:51.443862Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"38b26e584d45e0da","current-leader-member-id":"38b26e584d45e0da"}
	{"level":"warn","ts":"2025-12-13T09:25:51.443944Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T09:25:51.444031Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T09:25:51.444041Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T09:25:51.444058Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-13T09:25:51.444015Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-13T09:25:51.444108Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.38:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T09:25:51.444134Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.38:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T09:25:51.444142Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.38:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T09:25:51.447967Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.38:2380"}
	{"level":"error","ts":"2025-12-13T09:25:51.448071Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.38:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T09:25:51.448106Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.38:2380"}
	{"level":"info","ts":"2025-12-13T09:25:51.448114Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-553391","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.38:2380"],"advertise-client-urls":["https://192.168.39.38:2379"]}
	
	
	==> kernel <==
	 09:41:52 up 18 min,  0 users,  load average: 0.27, 0.22, 0.18
	Linux functional-553391 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 11 23:11:39 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [2237eb3cfb9427342a03e244cb23da2440800776c88d39def517cae2df0c9081] <==
	I1213 09:27:34.752018       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1213 09:27:34.752063       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1213 09:27:34.752152       1 aggregator.go:187] initial CRD sync complete...
	I1213 09:27:34.752161       1 autoregister_controller.go:144] Starting autoregister controller
	I1213 09:27:34.752166       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 09:27:34.752170       1 cache.go:39] Caches are synced for autoregister controller
	I1213 09:27:34.755443       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 09:27:34.768964       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 09:27:34.989792       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 09:27:35.552170       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1213 09:27:36.683227       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 09:27:36.740625       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 09:27:36.773702       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 09:27:36.784227       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 09:27:38.099192       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 09:27:38.250187       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 09:27:38.348439       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 09:31:44.252988       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.31.242"}
	I1213 09:31:47.762685       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.66.94"}
	I1213 09:31:51.195438       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.98.33.51"}
	I1213 09:35:55.088227       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 09:35:55.327550       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.202.53"}
	I1213 09:35:55.349609       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.109.158"}
	I1213 09:37:34.662190       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 09:38:04.053710       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.57.7"}
	
	
	==> kube-controller-manager [bef6c74863ea74a437871b1d75d0adf6fdd2e13050bd9f5fda41a413df2ad695] <==
	I1213 09:25:27.972668       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.974789       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.972832       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.974966       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.975360       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.972929       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.977747       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.977858       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.977973       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.978099       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.978132       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.978207       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.978922       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.979032       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.979085       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.984323       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.984406       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.984450       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.984501       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:27.988096       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:28.008019       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:28.066414       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:28.084492       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:28.084512       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1213 09:25:28.084517       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-controller-manager [c47ee0aabb1efdd09eec154c30881929f05d58a9520e541025def583ab954d02] <==
	I1213 09:27:37.929063       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.959660       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.959721       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.959741       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.959800       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.960015       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.960080       1 range_allocator.go:177] "Sending events to api server"
	I1213 09:27:37.960131       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1213 09:27:37.960137       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:27:37.960141       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.959682       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.960314       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1213 09:27:37.960382       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-553391"
	I1213 09:27:37.960424       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1213 09:27:37.968967       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.971598       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:37.972793       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1213 09:27:37.972804       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1213 09:27:37.986319       1 shared_informer.go:377] "Caches are synced"
	E1213 09:35:55.174531       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:55.184739       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:55.198136       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:55.216383       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:55.221609       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1213 09:35:55.233963       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [4a033e03e6998717fd54c7a982e213390e2a93a581ab7db6633f97e03610bcce] <==
	I1213 09:27:36.194743       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:27:36.296159       1 shared_informer.go:377] "Caches are synced"
	I1213 09:27:36.299041       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.38"]
	E1213 09:27:36.299525       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:27:36.369439       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 09:27:36.369524       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 09:27:36.369545       1 server_linux.go:136] "Using iptables Proxier"
	I1213 09:27:36.381798       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:27:36.382110       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1213 09:27:36.382141       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:27:36.388769       1 config.go:200] "Starting service config controller"
	I1213 09:27:36.389028       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:27:36.389070       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:27:36.389076       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:27:36.389235       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:27:36.389535       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:27:36.390638       1 config.go:309] "Starting node config controller"
	I1213 09:27:36.390719       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:27:36.489796       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 09:27:36.489917       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 09:27:36.489987       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 09:27:36.491158       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [a7de46befbc341adcce80258301d16635ae42bacd96697be843a3ea37de3097c] <==
	I1213 09:25:00.988777       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:25:25.893454       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:25.893556       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.38"]
	E1213 09:25:25.893655       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:25:25.940800       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 09:25:25.940970       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 09:25:25.940997       1 server_linux.go:136] "Using iptables Proxier"
	I1213 09:25:25.950368       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:25:25.950781       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1213 09:25:25.950797       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:25:25.953179       1 config.go:200] "Starting service config controller"
	I1213 09:25:25.954715       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:25:25.953595       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:25:25.955069       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:25:25.953610       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:25:25.955255       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:25:25.959118       1 config.go:309] "Starting node config controller"
	I1213 09:25:25.959146       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:25:26.055988       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 09:25:26.056061       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 09:25:26.056143       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 09:25:26.060009       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [15ccf55277802f6d84e33d122e3b923248435c477a7d8c5e6abca019214467ad] <==
	E1213 09:25:24.739405       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1213 09:25:24.739461       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1213 09:25:24.739526       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1213 09:25:24.739572       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1213 09:25:24.739642       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1213 09:25:24.739677       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1213 09:25:24.739770       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1213 09:25:24.739826       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1213 09:25:24.740002       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1213 09:25:24.740105       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1213 09:25:24.740186       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1213 09:25:24.740250       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1213 09:25:24.740316       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1213 09:25:24.740394       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1213 09:25:24.740516       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1213 09:25:24.740571       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1213 09:25:24.740616       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1213 09:25:24.740653       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1213 09:25:27.991267       1 shared_informer.go:377] "Caches are synced"
	I1213 09:25:51.355821       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1213 09:25:51.365725       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1213 09:25:51.365754       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:25:51.368478       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1213 09:25:51.368492       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1213 09:25:51.368520       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 13 09:41:28 functional-553391 kubelet[6136]: E1213 09:41:28.996146    6136 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-553391_kube-system(623733f12fc7a2bd3df192b3433220d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-553391_kube-system(623733f12fc7a2bd3df192b3433220d0)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-553391" podUID="623733f12fc7a2bd3df192b3433220d0"
	Dec 13 09:41:32 functional-553391 kubelet[6136]: E1213 09:41:32.066513    6136 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod623733f12fc7a2bd3df192b3433220d0/crio-ad35e4e9691075c9ec6e8d7f1842bc656bd2d3fc305cb40f78320be06d05e563: Error finding container ad35e4e9691075c9ec6e8d7f1842bc656bd2d3fc305cb40f78320be06d05e563: Status 404 returned error can't find the container with id ad35e4e9691075c9ec6e8d7f1842bc656bd2d3fc305cb40f78320be06d05e563
	Dec 13 09:41:32 functional-553391 kubelet[6136]: E1213 09:41:32.066832    6136 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod9423f6b5da5b329cef63430d36acee6e/crio-235bdf467969af32577d0402d1c26dbaa02b168f83902d82024b9484fa704e63: Error finding container 235bdf467969af32577d0402d1c26dbaa02b168f83902d82024b9484fa704e63: Status 404 returned error can't find the container with id 235bdf467969af32577d0402d1c26dbaa02b168f83902d82024b9484fa704e63
	Dec 13 09:41:32 functional-553391 kubelet[6136]: E1213 09:41:32.067250    6136 manager.go:1119] Failed to create existing container: /kubepods/burstable/poda819a5b5d8a1acac4ff9198bf329d816/crio-f1efa84ba477462501f24ab4ce10544e1def90c12e260715f5392f8129ccbf5f: Error finding container f1efa84ba477462501f24ab4ce10544e1def90c12e260715f5392f8129ccbf5f: Status 404 returned error can't find the container with id f1efa84ba477462501f24ab4ce10544e1def90c12e260715f5392f8129ccbf5f
	Dec 13 09:41:32 functional-553391 kubelet[6136]: E1213 09:41:32.067568    6136 manager.go:1119] Failed to create existing container: /kubepods/besteffort/pod4b1284e2-956a-4e4a-b504-57f20fa9a365/crio-e6342f727896e432c490e632c9755f1679b02fab10e6d297e644839c8d4d7cb9: Error finding container e6342f727896e432c490e632c9755f1679b02fab10e6d297e644839c8d4d7cb9: Status 404 returned error can't find the container with id e6342f727896e432c490e632c9755f1679b02fab10e6d297e644839c8d4d7cb9
	Dec 13 09:41:32 functional-553391 kubelet[6136]: E1213 09:41:32.067967    6136 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod474b0e4e-417c-49da-b863-8950ea9eb75f/crio-5e9144b389dce1dd3d416188988aa991a8d750e08b7fb3adbf78f24eca9c973a: Error finding container 5e9144b389dce1dd3d416188988aa991a8d750e08b7fb3adbf78f24eca9c973a: Status 404 returned error can't find the container with id 5e9144b389dce1dd3d416188988aa991a8d750e08b7fb3adbf78f24eca9c973a
	Dec 13 09:41:32 functional-553391 kubelet[6136]: E1213 09:41:32.068367    6136 manager.go:1119] Failed to create existing container: /kubepods/besteffort/pod5b03d972-1560-487e-8c23-357ba0a288ce/crio-3c085dc8222fb8456c9f36193cce150826e12aca23bda60fede0bb6b4fcead2b: Error finding container 3c085dc8222fb8456c9f36193cce150826e12aca23bda60fede0bb6b4fcead2b: Status 404 returned error can't find the container with id 3c085dc8222fb8456c9f36193cce150826e12aca23bda60fede0bb6b4fcead2b
	Dec 13 09:41:32 functional-553391 kubelet[6136]: E1213 09:41:32.280855    6136 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765618892279699669  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189831}  inodes_used:{value:89}}"
	Dec 13 09:41:32 functional-553391 kubelet[6136]: E1213 09:41:32.280950    6136 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765618892279699669  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189831}  inodes_used:{value:89}}"
	Dec 13 09:41:36 functional-553391 kubelet[6136]: E1213 09:41:36.982324    6136 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-553391" containerName="etcd"
	Dec 13 09:41:39 functional-553391 kubelet[6136]: E1213 09:41:39.982372    6136 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-553391" containerName="kube-scheduler"
	Dec 13 09:41:39 functional-553391 kubelet[6136]: E1213 09:41:39.992807    6136 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\" already exists"
	Dec 13 09:41:39 functional-553391 kubelet[6136]: E1213 09:41:39.992845    6136 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\" already exists" pod="kube-system/kube-scheduler-functional-553391"
	Dec 13 09:41:39 functional-553391 kubelet[6136]: E1213 09:41:39.992860    6136 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\" already exists" pod="kube-system/kube-scheduler-functional-553391"
	Dec 13 09:41:39 functional-553391 kubelet[6136]: E1213 09:41:39.992974    6136 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-553391_kube-system(623733f12fc7a2bd3df192b3433220d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-553391_kube-system(623733f12fc7a2bd3df192b3433220d0)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-553391" podUID="623733f12fc7a2bd3df192b3433220d0"
	Dec 13 09:41:42 functional-553391 kubelet[6136]: E1213 09:41:42.283248    6136 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765618902282942988  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189831}  inodes_used:{value:89}}"
	Dec 13 09:41:42 functional-553391 kubelet[6136]: E1213 09:41:42.283268    6136 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765618902282942988  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189831}  inodes_used:{value:89}}"
	Dec 13 09:41:47 functional-553391 kubelet[6136]: E1213 09:41:47.982254    6136 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-553391" containerName="kube-controller-manager"
	Dec 13 09:41:51 functional-553391 kubelet[6136]: E1213 09:41:51.984076    6136 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-553391" containerName="kube-scheduler"
	Dec 13 09:41:51 functional-553391 kubelet[6136]: E1213 09:41:51.998244    6136 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\" already exists"
	Dec 13 09:41:51 functional-553391 kubelet[6136]: E1213 09:41:51.998310    6136 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\" already exists" pod="kube-system/kube-scheduler-functional-553391"
	Dec 13 09:41:51 functional-553391 kubelet[6136]: E1213 09:41:51.998327    6136 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\" already exists" pod="kube-system/kube-scheduler-functional-553391"
	Dec 13 09:41:51 functional-553391 kubelet[6136]: E1213 09:41:51.998379    6136 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-553391_kube-system(623733f12fc7a2bd3df192b3433220d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-553391_kube-system(623733f12fc7a2bd3df192b3433220d0)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-553391_kube-system_623733f12fc7a2bd3df192b3433220d0_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-553391" podUID="623733f12fc7a2bd3df192b3433220d0"
	Dec 13 09:41:52 functional-553391 kubelet[6136]: E1213 09:41:52.287804    6136 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765618912286695930  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189831}  inodes_used:{value:89}}"
	Dec 13 09:41:52 functional-553391 kubelet[6136]: E1213 09:41:52.287827    6136 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765618912286695930  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189831}  inodes_used:{value:89}}"
	
	
	==> storage-provisioner [4b6d3aa793a5d4bdfe5b2b3732af0374d4cddf7fb3eedc48b5fe0e21ee321de3] <==
	W1213 09:41:27.797101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:41:29.800666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:41:29.810398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:41:31.813968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:41:31.825418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:41:33.828938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:41:33.836771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:41:35.840864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:41:35.846670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:41:37.851216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:41:37.860081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:41:39.863057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:41:39.868659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:41:41.872697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:41:41.881198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:41:43.885095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:41:43.890096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:41:45.893828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:41:45.899961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:41:47.903520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:41:47.917847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:41:49.921694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:41:49.931713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:41:51.936985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:41:51.957517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b9244bb17b8485f84b8f76fd8b0e1a25b8f2e3ef5cf5a3f95f1be4666b075421] <==
	I1213 09:25:26.201262       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 09:25:26.213115       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 09:25:26.215708       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 09:25:26.220587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:29.676396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:33.936290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:37.534481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:40.587597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:43.611039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:43.616691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 09:25:43.616926       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 09:25:43.617034       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2a807b17-b228-43c8-97ae-e7e16ec2cdf4", APIVersion:"v1", ResourceVersion:"532", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-553391_d215204c-1541-413a-b16f-a41e2460e6c8 became leader
	I1213 09:25:43.617264       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-553391_d215204c-1541-413a-b16f-a41e2460e6c8!
	W1213 09:25:43.620228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:43.629789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 09:25:43.717650       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-553391_d215204c-1541-413a-b16f-a41e2460e6c8!
	W1213 09:25:45.635212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:45.648126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:47.651491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:47.657022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:49.662658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:25:49.675580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
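The kubelet excerpt above cycles through two errors for the whole window: the eviction manager cannot compute HasDedicatedImageFs because image stats are missing for /var/lib/containers/storage/overlay-images, and kube-scheduler cannot be restarted because cri-o reports that a sandbox with the logged name already exists. A minimal triage sketch, assuming crictl is available inside the functional-553391 VM (not part of the recorded run):

	# open a shell on the node under test
	out/minikube-linux-amd64 -p functional-553391 ssh
	# inside the VM: the image filesystem stats the eviction manager could not resolve
	sudo crictl imagefsinfo
	# inside the VM: list scheduler sandboxes; a stale non-running entry would explain the name collision
	sudo crictl pods --name kube-scheduler-functional-553391
	# inside the VM: remove a stale sandbox by its ID (destructive; only sensible on a throwaway test VM)
	sudo crictl rmp SANDBOX_ID   # SANDBOX_ID is a placeholder taken from the previous command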
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-553391 -n functional-553391
helpers_test.go:270: (dbg) Run:  kubectl --context functional-553391 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-5758569b79-mc2sr hello-node-connect-9f67c86d4-5k96g mysql-7d7b65bc95-bmf88 sp-pod dashboard-metrics-scraper-5565989548-kmw97 kubernetes-dashboard-b84665fb8-fphhq
helpers_test.go:283: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-553391 describe pod busybox-mount hello-node-5758569b79-mc2sr hello-node-connect-9f67c86d4-5k96g mysql-7d7b65bc95-bmf88 sp-pod dashboard-metrics-scraper-5565989548-kmw97 kubernetes-dashboard-b84665fb8-fphhq
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-553391 describe pod busybox-mount hello-node-5758569b79-mc2sr hello-node-connect-9f67c86d4-5k96g mysql-7d7b65bc95-bmf88 sp-pod dashboard-metrics-scraper-5565989548-kmw97 kubernetes-dashboard-b84665fb8-fphhq: exit status 1 (104.447872ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  mount-munger:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    Environment:  <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-42dlg (ro)
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-42dlg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             hello-node-5758569b79-mc2sr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Image:        kicbase/echo-server
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8f569 (ro)
	Volumes:
	  kube-api-access-8f569:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             hello-node-connect-9f67c86d4-5k96g
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Image:        kicbase/echo-server
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lv7bb (ro)
	Volumes:
	  kube-api-access-lv7bb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             mysql-7d7b65bc95-bmf88
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=mysql
	                  pod-template-hash=7d7b65bc95
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-7d7b65bc95
	Containers:
	  mysql:
	    Image:      public.ecr.aws/docker/library/mysql:8.4
	    Port:       3306/TCP (mysql)
	    Host Port:  0/TCP (mysql)
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q89gb (ro)
	Volumes:
	  kube-api-access-q89gb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Image:        public.ecr.aws/nginx/nginx:alpine
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p9sfg (ro)
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-p9sfg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-kmw97" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-fphhq" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context functional-553391 describe pod busybox-mount hello-node-5758569b79-mc2sr hello-node-connect-9f67c86d4-5k96g mysql-7d7b65bc95-bmf88 sp-pod dashboard-metrics-scraper-5565989548-kmw97 kubernetes-dashboard-b84665fb8-fphhq: exit status 1
E1213 09:42:37.813484  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:43:56.551214  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:44:00.883666  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:47:37.813497  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (602.83s)
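Every pod described above reports Status: Pending, Node: <none> and an empty Events list, i.e. none of them was ever scheduled; together with the kube-scheduler sandbox errors in the kubelet log, the MySQL timeout reads as a scheduling problem rather than an image pull or probe failure. A hedged follow-up sketch using standard kubectl against the same context (not part of the recorded run):

	# is the kube-scheduler static pod itself healthy?
	kubectl --context functional-553391 -n kube-system get pods -l component=kube-scheduler -o wide
	# any FailedScheduling events for the stuck workloads?
	kubectl --context functional-553391 get events -A --field-selector reason=FailedScheduling --sort-by=.lastTimestamp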

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (600.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-553391 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-553391 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-mc2sr" [34ac5109-2776-4e52-8b5f-7c7c9bae4f22] Pending
functional_test.go:1460: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-553391 -n functional-553391
functional_test.go:1460: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-13 09:41:48.006911611 +0000 UTC m=+1818.734872221
functional_test.go:1460: (dbg) Run:  kubectl --context functional-553391 describe po hello-node-5758569b79-mc2sr -n default
functional_test.go:1460: (dbg) kubectl --context functional-553391 describe po hello-node-5758569b79-mc2sr -n default:
Name:             hello-node-5758569b79-mc2sr
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=hello-node
pod-template-hash=5758569b79
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/hello-node-5758569b79
Containers:
echo-server:
Image:        kicbase/echo-server
Port:         <none>
Host Port:    <none>
Environment:  <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8f569 (ro)
Volumes:
kube-api-access-8f569:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
functional_test.go:1460: (dbg) Run:  kubectl --context functional-553391 logs hello-node-5758569b79-mc2sr -n default
functional_test.go:1460: (dbg) kubectl --context functional-553391 logs hello-node-5758569b79-mc2sr -n default:
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (600.55s)
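As in the MySQL case, hello-node-5758569b79-mc2sr stayed Pending with Node: <none> and no events for the full 10m0s, so the deployment and service were created but never gained a running replica. A quick status check one could run against the same context (a sketch, not taken from the recorded run):

	# deployment, replicaset and pod state for the hello-node app in one view
	kubectl --context functional-553391 get deploy,rs,pods -l app=hello-node -o wide
	# replicaset events sometimes carry the reason a pod could not be created or scheduled
	kubectl --context functional-553391 describe rs -l app=hello-node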

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (241.96s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-553391 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2488081652/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765618309150067471" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2488081652/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765618309150067471" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2488081652/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765618309150067471" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2488081652/001/test-1765618309150067471
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553391 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (170.410126ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 09:31:49.321051  391877 retry.go:31] will retry after 285.07293ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 09:31 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 09:31 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 09:31 test-1765618309150067471
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh cat /mount-9p/test-1765618309150067471
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-553391 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [d17d45a8-bf89-4649-b240-12b7915f0fce] Pending
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: WARNING: pod list for "default" "integration-test=busybox-mount" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_mount_test.go:153: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: pod "integration-test=busybox-mount" failed to start within 4m0s: context deadline exceeded ****
functional_test_mount_test.go:153: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-553391 -n functional-553391
functional_test_mount_test.go:153: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: showing logs for failed pods as of 2025-12-13 09:35:50.535745973 +0000 UTC m=+1461.263706574
functional_test_mount_test.go:153: (dbg) Run:  kubectl --context functional-553391 describe po busybox-mount -n default
functional_test_mount_test.go:153: (dbg) kubectl --context functional-553391 describe po busybox-mount -n default:
Name:             busybox-mount
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           integration-test=busybox-mount
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
mount-munger:
Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
Port:       <none>
Host Port:  <none>
Command:
/bin/sh
-c
--
Args:
cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
Environment:  <none>
Mounts:
/mount-9p from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-42dlg (ro)
Volumes:
test-volume:
Type:          HostPath (bare host directory volume)
Path:          /mount-9p
HostPathType:  
kube-api-access-42dlg:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
functional_test_mount_test.go:153: (dbg) Run:  kubectl --context functional-553391 logs busybox-mount -n default
functional_test_mount_test.go:153: (dbg) kubectl --context functional-553391 logs busybox-mount -n default:
functional_test_mount_test.go:154: failed waiting for busybox-mount pod: integration-test=busybox-mount within 4m0s: context deadline exceeded
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553391 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (166.306808ms)

                                                
                                                
-- stdout --
	192.168.39.1 on /mount-9p type 9p (rw,relatime,dfltuid=1000,dfltgid=1000,access=any,msize=262144,trans=tcp,noextend,port=40489)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec 13 09:31 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec 13 09:31 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec 13 09:31 test-1765618309150067471
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-amd64 -p functional-553391 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-553391 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2488081652/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-553391 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2488081652/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2488081652/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.39.1:40489
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2488081652/001 to /mount-9p

                                                
                                                
* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

                                                
                                                

                                                
                                                
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-553391 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2488081652/001:/mount-9p --alsologtostderr -v=1] stderr:
I1213 09:31:49.214309  401844 out.go:360] Setting OutFile to fd 1 ...
I1213 09:31:49.214505  401844 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:31:49.214520  401844 out.go:374] Setting ErrFile to fd 2...
I1213 09:31:49.214526  401844 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:31:49.214839  401844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
I1213 09:31:49.215166  401844 mustload.go:66] Loading cluster: functional-553391
I1213 09:31:49.215533  401844 config.go:182] Loaded profile config "functional-553391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 09:31:49.217607  401844 host.go:66] Checking if "functional-553391" exists ...
I1213 09:31:49.220787  401844 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
I1213 09:31:49.221275  401844 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
I1213 09:31:49.221304  401844 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
I1213 09:31:49.224352  401844 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2488081652/001 into VM as /mount-9p ...
I1213 09:31:49.225757  401844 out.go:179]   - Mount type:   9p
I1213 09:31:49.227022  401844 out.go:179]   - User ID:      docker
I1213 09:31:49.228442  401844 out.go:179]   - Group ID:     docker
I1213 09:31:49.229664  401844 out.go:179]   - Version:      9p2000.L
I1213 09:31:49.231017  401844 out.go:179]   - Message Size: 262144
I1213 09:31:49.233183  401844 out.go:179]   - Options:      map[]
I1213 09:31:49.234660  401844 out.go:179]   - Bind Address: 192.168.39.1:40489
I1213 09:31:49.235953  401844 out.go:179] * Userspace file server: 
I1213 09:31:49.236139  401844 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1213 09:31:49.239136  401844 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
I1213 09:31:49.239603  401844 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
I1213 09:31:49.239638  401844 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
I1213 09:31:49.239796  401844 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/functional-553391/id_rsa Username:docker}
I1213 09:31:49.330683  401844 mount.go:180] unmount for /mount-9p ran successfully
I1213 09:31:49.330712  401844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1213 09:31:49.348126  401844 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=40489,trans=tcp,version=9p2000.L 192.168.39.1 /mount-9p"
I1213 09:31:49.402790  401844 main.go:127] stdlog: ufs.go:141 connected
I1213 09:31:49.403013  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tversion tag 65535 msize 262144 version '9P2000.L'
I1213 09:31:49.403102  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rversion tag 65535 msize 262144 version '9P2000'
I1213 09:31:49.404747  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1213 09:31:49.404842  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rattach tag 0 aqid (20fa30b 170d581c 'd')
I1213 09:31:49.405995  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tstat tag 0 fid 0
I1213 09:31:49.406134  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa30b 170d581c 'd') m d775 at 0 mt 1765618309 l 4096 t 0 d 0 ext )
I1213 09:31:49.406467  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tstat tag 0 fid 0
I1213 09:31:49.406564  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa30b 170d581c 'd') m d775 at 0 mt 1765618309 l 4096 t 0 d 0 ext )
I1213 09:31:49.409012  401844 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/.mount-process: {Name:mk9228d9c0f485b03a28a166be6fba8f908c45e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 09:31:49.409233  401844 mount.go:105] mount successful: ""
I1213 09:31:49.413214  401844 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2488081652/001 to /mount-9p
I1213 09:31:49.414644  401844 out.go:203] 
I1213 09:31:49.415924  401844 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1213 09:31:49.953912  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tstat tag 0 fid 0
I1213 09:31:49.954080  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa30b 170d581c 'd') m d775 at 0 mt 1765618309 l 4096 t 0 d 0 ext )
I1213 09:31:49.956005  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Twalk tag 0 fid 0 newfid 1 
I1213 09:31:49.956057  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rwalk tag 0 
I1213 09:31:49.956262  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Topen tag 0 fid 1 mode 0
I1213 09:31:49.956355  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Ropen tag 0 qid (20fa30b 170d581c 'd') iounit 0
I1213 09:31:49.956668  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tstat tag 0 fid 0
I1213 09:31:49.956780  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa30b 170d581c 'd') m d775 at 0 mt 1765618309 l 4096 t 0 d 0 ext )
I1213 09:31:49.957011  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tread tag 0 fid 1 offset 0 count 262120
I1213 09:31:49.957179  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rread tag 0 count 258
I1213 09:31:49.957387  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tread tag 0 fid 1 offset 258 count 261862
I1213 09:31:49.957412  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rread tag 0 count 0
I1213 09:31:49.957654  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tread tag 0 fid 1 offset 258 count 262120
I1213 09:31:49.957675  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rread tag 0 count 0
I1213 09:31:49.957909  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1213 09:31:49.957934  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rwalk tag 0 (20fa30d 170d581c '') 
I1213 09:31:49.958441  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tstat tag 0 fid 2
I1213 09:31:49.958516  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa30d 170d581c '') m 644 at 0 mt 1765618309 l 24 t 0 d 0 ext )
I1213 09:31:49.958971  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tstat tag 0 fid 2
I1213 09:31:49.959050  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa30d 170d581c '') m 644 at 0 mt 1765618309 l 24 t 0 d 0 ext )
I1213 09:31:49.959269  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tclunk tag 0 fid 2
I1213 09:31:49.959292  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rclunk tag 0
I1213 09:31:49.959594  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1213 09:31:49.959659  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rwalk tag 0 (20fa30d 170d581c '') 
I1213 09:31:49.960023  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tstat tag 0 fid 2
I1213 09:31:49.960097  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa30d 170d581c '') m 644 at 0 mt 1765618309 l 24 t 0 d 0 ext )
I1213 09:31:49.960373  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tclunk tag 0 fid 2
I1213 09:31:49.960408  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rclunk tag 0
I1213 09:31:49.960654  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Twalk tag 0 fid 0 newfid 2 0:'test-1765618309150067471' 
I1213 09:31:49.960689  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rwalk tag 0 (20fa30e 170d581c '') 
I1213 09:31:49.961123  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tstat tag 0 fid 2
I1213 09:31:49.961213  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rstat tag 0 st ('test-1765618309150067471' 'jenkins' 'balintp' '' q (20fa30e 170d581c '') m 644 at 0 mt 1765618309 l 24 t 0 d 0 ext )
I1213 09:31:49.961556  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tstat tag 0 fid 2
I1213 09:31:49.961661  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rstat tag 0 st ('test-1765618309150067471' 'jenkins' 'balintp' '' q (20fa30e 170d581c '') m 644 at 0 mt 1765618309 l 24 t 0 d 0 ext )
I1213 09:31:49.962045  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tclunk tag 0 fid 2
I1213 09:31:49.962068  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rclunk tag 0
I1213 09:31:49.962277  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Twalk tag 0 fid 0 newfid 2 0:'test-1765618309150067471' 
I1213 09:31:49.962306  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rwalk tag 0 (20fa30e 170d581c '') 
I1213 09:31:49.962557  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tstat tag 0 fid 2
I1213 09:31:49.962674  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rstat tag 0 st ('test-1765618309150067471' 'jenkins' 'balintp' '' q (20fa30e 170d581c '') m 644 at 0 mt 1765618309 l 24 t 0 d 0 ext )
I1213 09:31:49.962864  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tclunk tag 0 fid 2
I1213 09:31:49.962884  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rclunk tag 0
I1213 09:31:49.963125  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1213 09:31:49.963176  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rwalk tag 0 (20fa30c 170d581c '') 
I1213 09:31:49.963343  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tstat tag 0 fid 2
I1213 09:31:49.963427  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa30c 170d581c '') m 644 at 0 mt 1765618309 l 24 t 0 d 0 ext )
I1213 09:31:49.963706  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tstat tag 0 fid 2
I1213 09:31:49.963859  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa30c 170d581c '') m 644 at 0 mt 1765618309 l 24 t 0 d 0 ext )
I1213 09:31:49.964049  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tclunk tag 0 fid 2
I1213 09:31:49.964071  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rclunk tag 0
I1213 09:31:49.964601  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1213 09:31:49.964652  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rwalk tag 0 (20fa30c 170d581c '') 
I1213 09:31:49.964877  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tstat tag 0 fid 2
I1213 09:31:49.964946  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa30c 170d581c '') m 644 at 0 mt 1765618309 l 24 t 0 d 0 ext )
I1213 09:31:49.965161  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tclunk tag 0 fid 2
I1213 09:31:49.965193  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rclunk tag 0
I1213 09:31:49.965535  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tread tag 0 fid 1 offset 258 count 262120
I1213 09:31:49.965561  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rread tag 0 count 0
I1213 09:31:49.965812  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tclunk tag 0 fid 1
I1213 09:31:49.965870  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rclunk tag 0
I1213 09:31:50.134307  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Twalk tag 0 fid 0 newfid 1 0:'test-1765618309150067471' 
I1213 09:31:50.134433  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rwalk tag 0 (20fa30e 170d581c '') 
I1213 09:31:50.134805  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tstat tag 0 fid 1
I1213 09:31:50.134965  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rstat tag 0 st ('test-1765618309150067471' 'jenkins' 'balintp' '' q (20fa30e 170d581c '') m 644 at 0 mt 1765618309 l 24 t 0 d 0 ext )
I1213 09:31:50.135339  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Twalk tag 0 fid 1 newfid 2 
I1213 09:31:50.135388  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rwalk tag 0 
I1213 09:31:50.135643  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Topen tag 0 fid 2 mode 0
I1213 09:31:50.135716  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Ropen tag 0 qid (20fa30e 170d581c '') iounit 0
I1213 09:31:50.136000  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tstat tag 0 fid 1
I1213 09:31:50.136105  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rstat tag 0 st ('test-1765618309150067471' 'jenkins' 'balintp' '' q (20fa30e 170d581c '') m 644 at 0 mt 1765618309 l 24 t 0 d 0 ext )
I1213 09:31:50.136405  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tread tag 0 fid 2 offset 0 count 262120
I1213 09:31:50.136477  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rread tag 0 count 24
I1213 09:31:50.136831  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tread tag 0 fid 2 offset 24 count 262120
I1213 09:31:50.136880  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rread tag 0 count 0
I1213 09:31:50.137127  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tread tag 0 fid 2 offset 24 count 262120
I1213 09:31:50.137165  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rread tag 0 count 0
I1213 09:31:50.137574  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tclunk tag 0 fid 2
I1213 09:31:50.137633  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rclunk tag 0
I1213 09:31:50.137886  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tclunk tag 0 fid 1
I1213 09:31:50.137916  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rclunk tag 0
I1213 09:35:50.815508  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tstat tag 0 fid 0
I1213 09:35:50.816453  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa30b 170d581c 'd') m d775 at 0 mt 1765618309 l 4096 t 0 d 0 ext )
I1213 09:35:50.818755  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Twalk tag 0 fid 0 newfid 1 
I1213 09:35:50.818828  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rwalk tag 0 
I1213 09:35:50.819289  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Topen tag 0 fid 1 mode 0
I1213 09:35:50.819596  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Ropen tag 0 qid (20fa30b 170d581c 'd') iounit 0
I1213 09:35:50.819891  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tstat tag 0 fid 0
I1213 09:35:50.820027  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa30b 170d581c 'd') m d775 at 0 mt 1765618309 l 4096 t 0 d 0 ext )
I1213 09:35:50.820395  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tread tag 0 fid 1 offset 0 count 262120
I1213 09:35:50.820622  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rread tag 0 count 258
I1213 09:35:50.820919  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tread tag 0 fid 1 offset 258 count 261862
I1213 09:35:50.820961  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rread tag 0 count 0
I1213 09:35:50.821267  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tread tag 0 fid 1 offset 258 count 262120
I1213 09:35:50.821312  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rread tag 0 count 0
I1213 09:35:50.821532  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1213 09:35:50.821581  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rwalk tag 0 (20fa30d 170d581c '') 
I1213 09:35:50.821757  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tstat tag 0 fid 2
I1213 09:35:50.821872  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa30d 170d581c '') m 644 at 0 mt 1765618309 l 24 t 0 d 0 ext )
I1213 09:35:50.822201  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tstat tag 0 fid 2
I1213 09:35:50.822317  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa30d 170d581c '') m 644 at 0 mt 1765618309 l 24 t 0 d 0 ext )
I1213 09:35:50.822563  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tclunk tag 0 fid 2
I1213 09:35:50.822601  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rclunk tag 0
I1213 09:35:50.822846  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1213 09:35:50.822905  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rwalk tag 0 (20fa30d 170d581c '') 
I1213 09:35:50.823316  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tstat tag 0 fid 2
I1213 09:35:50.823426  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa30d 170d581c '') m 644 at 0 mt 1765618309 l 24 t 0 d 0 ext )
I1213 09:35:50.823669  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tclunk tag 0 fid 2
I1213 09:35:50.823699  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rclunk tag 0
I1213 09:35:50.823999  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Twalk tag 0 fid 0 newfid 2 0:'test-1765618309150067471' 
I1213 09:35:50.824055  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rwalk tag 0 (20fa30e 170d581c '') 
I1213 09:35:50.824392  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tstat tag 0 fid 2
I1213 09:35:50.824492  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rstat tag 0 st ('test-1765618309150067471' 'jenkins' 'balintp' '' q (20fa30e 170d581c '') m 644 at 0 mt 1765618309 l 24 t 0 d 0 ext )
I1213 09:35:50.824706  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tstat tag 0 fid 2
I1213 09:35:50.824813  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rstat tag 0 st ('test-1765618309150067471' 'jenkins' 'balintp' '' q (20fa30e 170d581c '') m 644 at 0 mt 1765618309 l 24 t 0 d 0 ext )
I1213 09:35:50.824990  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tclunk tag 0 fid 2
I1213 09:35:50.825021  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rclunk tag 0
I1213 09:35:50.825178  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Twalk tag 0 fid 0 newfid 2 0:'test-1765618309150067471' 
I1213 09:35:50.825228  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rwalk tag 0 (20fa30e 170d581c '') 
I1213 09:35:50.825399  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tstat tag 0 fid 2
I1213 09:35:50.825509  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rstat tag 0 st ('test-1765618309150067471' 'jenkins' 'balintp' '' q (20fa30e 170d581c '') m 644 at 0 mt 1765618309 l 24 t 0 d 0 ext )
I1213 09:35:50.825673  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tclunk tag 0 fid 2
I1213 09:35:50.825728  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rclunk tag 0
I1213 09:35:50.825900  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1213 09:35:50.825957  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rwalk tag 0 (20fa30c 170d581c '') 
I1213 09:35:50.826181  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tstat tag 0 fid 2
I1213 09:35:50.826260  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa30c 170d581c '') m 644 at 0 mt 1765618309 l 24 t 0 d 0 ext )
I1213 09:35:50.826448  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tstat tag 0 fid 2
I1213 09:35:50.826544  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa30c 170d581c '') m 644 at 0 mt 1765618309 l 24 t 0 d 0 ext )
I1213 09:35:50.826808  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tclunk tag 0 fid 2
I1213 09:35:50.826833  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rclunk tag 0
I1213 09:35:50.827000  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1213 09:35:50.827058  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rwalk tag 0 (20fa30c 170d581c '') 
I1213 09:35:50.827193  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tstat tag 0 fid 2
I1213 09:35:50.827263  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa30c 170d581c '') m 644 at 0 mt 1765618309 l 24 t 0 d 0 ext )
I1213 09:35:50.827470  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tclunk tag 0 fid 2
I1213 09:35:50.827499  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rclunk tag 0
I1213 09:35:50.827659  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tread tag 0 fid 1 offset 258 count 262120
I1213 09:35:50.827719  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rread tag 0 count 0
I1213 09:35:50.827870  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tclunk tag 0 fid 1
I1213 09:35:50.827924  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rclunk tag 0
I1213 09:35:50.830829  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1213 09:35:50.830913  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rerror tag 0 ename 'file not found' ecode 0
I1213 09:35:50.988989  401844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.38:59574 Tclunk tag 0 fid 0
I1213 09:35:50.989059  401844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.38:59574 Rclunk tag 0
I1213 09:35:50.990667  401844 main.go:127] stdlog: ufs.go:147 disconnected
I1213 09:35:51.012004  401844 out.go:179] * Unmounting /mount-9p ...
I1213 09:35:51.013520  401844 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1213 09:35:51.022171  401844 mount.go:180] unmount for /mount-9p ran successfully
I1213 09:35:51.022300  401844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/.mount-process: {Name:mk9228d9c0f485b03a28a166be6fba8f908c45e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 09:35:51.024159  401844 out.go:203] 
W1213 09:35:51.025737  401844 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1213 09:35:51.026869  401844 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (241.96s)
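The 9p mount itself worked: findmnt shows 192.168.39.1 mounted on /mount-9p, the three files written by the test are visible from the guest, and the ufs trace shows the guest reading test-1765618309150067471. Only the busybox-mount pod never left Pending, so /mount-9p/pod-dates was never written. A manual re-check of the mount path, reusing the commands the test drives (the host directory below is a placeholder):

	# terminal 1: keep the mount process alive in the foreground
	out/minikube-linux-amd64 mount -p functional-553391 /tmp/mount-src:/mount-9p --alsologtostderr -v=1
	# terminal 2: confirm the guest sees the 9p mount and its contents
	out/minikube-linux-amd64 -p functional-553391 ssh "findmnt -T /mount-9p && ls -la /mount-9p"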

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553391 service --namespace=default --https --url hello-node: exit status 115 (250.754336ms)

                                                
                                                
-- stdout --
	https://192.168.39.38:30871
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-553391 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.25s)
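The HTTPS URL itself is resolved (https://192.168.39.38:30871); the command exits 115 only because minikube finds no running pod behind the hello-node service. A quick way to confirm that from the host is to look at the workload and the service endpoints directly, as in this sketch (it assumes the kubectl context is named after the profile and that the pods carry an app=hello-node label, which is the usual convention for these tests but is not shown in the log):

    kubectl --context functional-553391 get deploy,pods -l app=hello-node
    kubectl --context functional-553391 get endpoints hello-node

An empty ENDPOINTS column is consistent with the SVC_UNREACHABLE error; once a pod is Ready, the same service --https --url invocation should return cleanly.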

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553391 service hello-node --url --format={{.IP}}: exit status 115 (255.292224ms)

                                                
                                                
-- stdout --
	192.168.39.38
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-553391 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.26s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553391 service hello-node --url: exit status 115 (243.919268ms)

                                                
                                                
-- stdout --
	http://192.168.39.38:30871
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-553391 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.38:30871
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.24s)
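As in the HTTPS and Format subtests above, the NodePort URL is printed before the readiness check fails, so exit status 115 reflects minikube's own "no running pod" check rather than a missing port allocation. Once the hello-node pod is actually Running, the endpoint from the output can be probed directly from the host, for example (address taken from the stdout above):

    curl -sI http://192.168.39.38:30871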

                                                
                                    
x
+
TestPreload (119.81s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-547899 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio
E1213 10:24:50.853238  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:25:19.632254  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-547899 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio: (1m0.727371749s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-547899 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-547899 image pull gcr.io/k8s-minikube/busybox: (3.48156472s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-547899
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-547899: (8.592996889s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-547899 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-547899 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (44.369608947s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-547899 image list
preload_test.go:73: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20250512-df8de77b

                                                
                                                
-- /stdout --
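The list above is missing gcr.io/k8s-minikube/busybox, which is exactly what the assertion at preload_test.go:73 looks for: an image pulled into the cluster while it ran with --preload=false is expected to still be present after the stop and the preload-enabled restart. The sequence can be replayed by hand with the same commands that appear in the log (sketch, reusing the profile name from this run):

    out/minikube-linux-amd64 start -p test-preload-547899 --memory=3072 --preload=false --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p test-preload-547899 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-amd64 stop -p test-preload-547899
    out/minikube-linux-amd64 start -p test-preload-547899 --preload=true --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p test-preload-547899 image list | grep busybox    # empty output reproduces this failure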
panic.go:615: *** TestPreload FAILED at 2025-12-13 10:26:38.800666371 +0000 UTC m=+4509.528626971
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-547899 -n test-preload-547899
helpers_test.go:253: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-547899 logs -n 25
helpers_test.go:261: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-501861 ssh -n multinode-501861-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-501861     │ jenkins │ v1.37.0 │ 13 Dec 25 10:14 UTC │ 13 Dec 25 10:14 UTC │
	│ ssh     │ multinode-501861 ssh -n multinode-501861 sudo cat /home/docker/cp-test_multinode-501861-m03_multinode-501861.txt                                          │ multinode-501861     │ jenkins │ v1.37.0 │ 13 Dec 25 10:14 UTC │ 13 Dec 25 10:14 UTC │
	│ cp      │ multinode-501861 cp multinode-501861-m03:/home/docker/cp-test.txt multinode-501861-m02:/home/docker/cp-test_multinode-501861-m03_multinode-501861-m02.txt │ multinode-501861     │ jenkins │ v1.37.0 │ 13 Dec 25 10:14 UTC │ 13 Dec 25 10:14 UTC │
	│ ssh     │ multinode-501861 ssh -n multinode-501861-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-501861     │ jenkins │ v1.37.0 │ 13 Dec 25 10:14 UTC │ 13 Dec 25 10:14 UTC │
	│ ssh     │ multinode-501861 ssh -n multinode-501861-m02 sudo cat /home/docker/cp-test_multinode-501861-m03_multinode-501861-m02.txt                                  │ multinode-501861     │ jenkins │ v1.37.0 │ 13 Dec 25 10:14 UTC │ 13 Dec 25 10:14 UTC │
	│ node    │ multinode-501861 node stop m03                                                                                                                            │ multinode-501861     │ jenkins │ v1.37.0 │ 13 Dec 25 10:14 UTC │ 13 Dec 25 10:14 UTC │
	│ node    │ multinode-501861 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-501861     │ jenkins │ v1.37.0 │ 13 Dec 25 10:14 UTC │ 13 Dec 25 10:14 UTC │
	│ node    │ list -p multinode-501861                                                                                                                                  │ multinode-501861     │ jenkins │ v1.37.0 │ 13 Dec 25 10:14 UTC │                     │
	│ stop    │ -p multinode-501861                                                                                                                                       │ multinode-501861     │ jenkins │ v1.37.0 │ 13 Dec 25 10:14 UTC │ 13 Dec 25 10:17 UTC │
	│ start   │ -p multinode-501861 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-501861     │ jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:19 UTC │
	│ node    │ list -p multinode-501861                                                                                                                                  │ multinode-501861     │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │                     │
	│ node    │ multinode-501861 node delete m03                                                                                                                          │ multinode-501861     │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ stop    │ multinode-501861 stop                                                                                                                                     │ multinode-501861     │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:22 UTC │
	│ start   │ -p multinode-501861 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-501861     │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:23 UTC │
	│ node    │ list -p multinode-501861                                                                                                                                  │ multinode-501861     │ jenkins │ v1.37.0 │ 13 Dec 25 10:24 UTC │                     │
	│ start   │ -p multinode-501861-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-501861-m02 │ jenkins │ v1.37.0 │ 13 Dec 25 10:24 UTC │                     │
	│ start   │ -p multinode-501861-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-501861-m03 │ jenkins │ v1.37.0 │ 13 Dec 25 10:24 UTC │ 13 Dec 25 10:24 UTC │
	│ node    │ add -p multinode-501861                                                                                                                                   │ multinode-501861     │ jenkins │ v1.37.0 │ 13 Dec 25 10:24 UTC │                     │
	│ delete  │ -p multinode-501861-m03                                                                                                                                   │ multinode-501861-m03 │ jenkins │ v1.37.0 │ 13 Dec 25 10:24 UTC │ 13 Dec 25 10:24 UTC │
	│ delete  │ -p multinode-501861                                                                                                                                       │ multinode-501861     │ jenkins │ v1.37.0 │ 13 Dec 25 10:24 UTC │ 13 Dec 25 10:24 UTC │
	│ start   │ -p test-preload-547899 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio                                │ test-preload-547899  │ jenkins │ v1.37.0 │ 13 Dec 25 10:24 UTC │ 13 Dec 25 10:25 UTC │
	│ image   │ test-preload-547899 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-547899  │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ stop    │ -p test-preload-547899                                                                                                                                    │ test-preload-547899  │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ start   │ -p test-preload-547899 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                          │ test-preload-547899  │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:26 UTC │
	│ image   │ test-preload-547899 image list                                                                                                                            │ test-preload-547899  │ jenkins │ v1.37.0 │ 13 Dec 25 10:26 UTC │ 13 Dec 25 10:26 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:25:54
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:25:54.297220  422337 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:25:54.297504  422337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:25:54.297517  422337 out.go:374] Setting ErrFile to fd 2...
	I1213 10:25:54.297522  422337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:25:54.297716  422337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
	I1213 10:25:54.298156  422337 out.go:368] Setting JSON to false
	I1213 10:25:54.299111  422337 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7703,"bootTime":1765613851,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 10:25:54.299174  422337 start.go:143] virtualization: kvm guest
	I1213 10:25:54.301296  422337 out.go:179] * [test-preload-547899] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 10:25:54.303115  422337 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:25:54.303125  422337 notify.go:221] Checking for updates...
	I1213 10:25:54.304416  422337 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:25:54.305697  422337 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
	I1213 10:25:54.306901  422337 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
	I1213 10:25:54.308022  422337 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 10:25:54.309299  422337 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:25:54.310955  422337 config.go:182] Loaded profile config "test-preload-547899": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:25:54.311446  422337 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:25:54.347414  422337 out.go:179] * Using the kvm2 driver based on existing profile
	I1213 10:25:54.348587  422337 start.go:309] selected driver: kvm2
	I1213 10:25:54.348602  422337 start.go:927] validating driver "kvm2" against &{Name:test-preload-547899 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-547899 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:25:54.348719  422337 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:25:54.349724  422337 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:25:54.349749  422337 cni.go:84] Creating CNI manager for ""
	I1213 10:25:54.349806  422337 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 10:25:54.349858  422337 start.go:353] cluster config:
	{Name:test-preload-547899 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-547899 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:25:54.349939  422337 iso.go:125] acquiring lock: {Name:mk4ce8bfab58620efe86d1c7a68d79ed9c81b6ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:25:54.351453  422337 out.go:179] * Starting "test-preload-547899" primary control-plane node in "test-preload-547899" cluster
	I1213 10:25:54.352733  422337 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 10:25:54.352760  422337 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-387918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 10:25:54.352778  422337 cache.go:65] Caching tarball of preloaded images
	I1213 10:25:54.352875  422337 preload.go:238] Found /home/jenkins/minikube-integration/22127-387918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 10:25:54.352886  422337 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 10:25:54.352970  422337 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/test-preload-547899/config.json ...
	I1213 10:25:54.353176  422337 start.go:360] acquireMachinesLock for test-preload-547899: {Name:mk911c6c71130df32abbe489ec2f7be251c727ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 10:25:54.353219  422337 start.go:364] duration metric: took 24.751µs to acquireMachinesLock for "test-preload-547899"
	I1213 10:25:54.353233  422337 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:25:54.353238  422337 fix.go:54] fixHost starting: 
	I1213 10:25:54.354981  422337 fix.go:112] recreateIfNeeded on test-preload-547899: state=Stopped err=<nil>
	W1213 10:25:54.355014  422337 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:25:54.356383  422337 out.go:252] * Restarting existing kvm2 VM for "test-preload-547899" ...
	I1213 10:25:54.356409  422337 main.go:143] libmachine: starting domain...
	I1213 10:25:54.356418  422337 main.go:143] libmachine: ensuring networks are active...
	I1213 10:25:54.357102  422337 main.go:143] libmachine: Ensuring network default is active
	I1213 10:25:54.357493  422337 main.go:143] libmachine: Ensuring network mk-test-preload-547899 is active
	I1213 10:25:54.357928  422337 main.go:143] libmachine: getting domain XML...
	I1213 10:25:54.358979  422337 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-547899</name>
	  <uuid>7836aa9d-8409-416e-8860-7a54a4560312</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22127-387918/.minikube/machines/test-preload-547899/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22127-387918/.minikube/machines/test-preload-547899/test-preload-547899.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:4f:8b:3a'/>
	      <source network='mk-test-preload-547899'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:64:66:a1'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
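For debugging outside minikube, the same definition can be read back straight from libvirt, for example (a sketch, using the qemu:///system URI that the run above already targets):

    virsh -c qemu:///system dumpxml test-preload-547899      # domain XML as libvirt currently sees it
    virsh -c qemu:///system domifaddr test-preload-547899    # addresses on the domain's interfaces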
	
	I1213 10:25:55.615582  422337 main.go:143] libmachine: waiting for domain to start...
	I1213 10:25:55.617083  422337 main.go:143] libmachine: domain is now running
	I1213 10:25:55.617099  422337 main.go:143] libmachine: waiting for IP...
	I1213 10:25:55.618071  422337 main.go:143] libmachine: domain test-preload-547899 has defined MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:25:55.618632  422337 main.go:143] libmachine: domain test-preload-547899 has current primary IP address 192.168.39.150 and MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:25:55.618646  422337 main.go:143] libmachine: found domain IP: 192.168.39.150
	I1213 10:25:55.618652  422337 main.go:143] libmachine: reserving static IP address...
	I1213 10:25:55.619091  422337 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-547899", mac: "52:54:00:4f:8b:3a", ip: "192.168.39.150"} in network mk-test-preload-547899: {Iface:virbr1 ExpiryTime:2025-12-13 11:24:56 +0000 UTC Type:0 Mac:52:54:00:4f:8b:3a Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-547899 Clientid:01:52:54:00:4f:8b:3a}
	I1213 10:25:55.619121  422337 main.go:143] libmachine: skip adding static IP to network mk-test-preload-547899 - found existing host DHCP lease matching {name: "test-preload-547899", mac: "52:54:00:4f:8b:3a", ip: "192.168.39.150"}
	I1213 10:25:55.619132  422337 main.go:143] libmachine: reserved static IP address 192.168.39.150 for domain test-preload-547899
	I1213 10:25:55.619137  422337 main.go:143] libmachine: waiting for SSH...
	I1213 10:25:55.619151  422337 main.go:143] libmachine: Getting to WaitForSSH function...
	I1213 10:25:55.621462  422337 main.go:143] libmachine: domain test-preload-547899 has defined MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:25:55.621874  422337 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4f:8b:3a", ip: ""} in network mk-test-preload-547899: {Iface:virbr1 ExpiryTime:2025-12-13 11:24:56 +0000 UTC Type:0 Mac:52:54:00:4f:8b:3a Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-547899 Clientid:01:52:54:00:4f:8b:3a}
	I1213 10:25:55.621899  422337 main.go:143] libmachine: domain test-preload-547899 has defined IP address 192.168.39.150 and MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:25:55.622069  422337 main.go:143] libmachine: Using SSH client type: native
	I1213 10:25:55.622308  422337 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I1213 10:25:55.622339  422337 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1213 10:25:58.731666  422337 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.150:22: connect: no route to host
	I1213 10:26:04.811633  422337 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.150:22: connect: no route to host
	I1213 10:26:07.916170  422337 main.go:143] libmachine: SSH cmd err, output: <nil>: 
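The two "no route to host" dials before this point are the normal WaitForSSH retries while the guest boots. The connection details match the sshutil lines later in the log (user docker, the per-machine id_rsa key), so reachability can also be probed by hand, as in this sketch (host-key checking is relaxed here only because the VM is disposable):

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -i /home/jenkins/minikube-integration/22127-387918/.minikube/machines/test-preload-547899/id_rsa \
        docker@192.168.39.150 'exit 0'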
	I1213 10:26:07.919494  422337 main.go:143] libmachine: domain test-preload-547899 has defined MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:07.920016  422337 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4f:8b:3a", ip: ""} in network mk-test-preload-547899: {Iface:virbr1 ExpiryTime:2025-12-13 11:26:05 +0000 UTC Type:0 Mac:52:54:00:4f:8b:3a Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-547899 Clientid:01:52:54:00:4f:8b:3a}
	I1213 10:26:07.920047  422337 main.go:143] libmachine: domain test-preload-547899 has defined IP address 192.168.39.150 and MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:07.920288  422337 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/test-preload-547899/config.json ...
	I1213 10:26:07.920539  422337 machine.go:94] provisionDockerMachine start ...
	I1213 10:26:07.922908  422337 main.go:143] libmachine: domain test-preload-547899 has defined MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:07.923251  422337 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4f:8b:3a", ip: ""} in network mk-test-preload-547899: {Iface:virbr1 ExpiryTime:2025-12-13 11:26:05 +0000 UTC Type:0 Mac:52:54:00:4f:8b:3a Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-547899 Clientid:01:52:54:00:4f:8b:3a}
	I1213 10:26:07.923273  422337 main.go:143] libmachine: domain test-preload-547899 has defined IP address 192.168.39.150 and MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:07.923423  422337 main.go:143] libmachine: Using SSH client type: native
	I1213 10:26:07.923626  422337 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I1213 10:26:07.923636  422337 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:26:08.029066  422337 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1213 10:26:08.029091  422337 buildroot.go:166] provisioning hostname "test-preload-547899"
	I1213 10:26:08.031657  422337 main.go:143] libmachine: domain test-preload-547899 has defined MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:08.032165  422337 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4f:8b:3a", ip: ""} in network mk-test-preload-547899: {Iface:virbr1 ExpiryTime:2025-12-13 11:26:05 +0000 UTC Type:0 Mac:52:54:00:4f:8b:3a Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-547899 Clientid:01:52:54:00:4f:8b:3a}
	I1213 10:26:08.032192  422337 main.go:143] libmachine: domain test-preload-547899 has defined IP address 192.168.39.150 and MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:08.032340  422337 main.go:143] libmachine: Using SSH client type: native
	I1213 10:26:08.032551  422337 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I1213 10:26:08.032565  422337 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-547899 && echo "test-preload-547899" | sudo tee /etc/hostname
	I1213 10:26:08.152639  422337 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-547899
	
	I1213 10:26:08.155669  422337 main.go:143] libmachine: domain test-preload-547899 has defined MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:08.156113  422337 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4f:8b:3a", ip: ""} in network mk-test-preload-547899: {Iface:virbr1 ExpiryTime:2025-12-13 11:26:05 +0000 UTC Type:0 Mac:52:54:00:4f:8b:3a Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-547899 Clientid:01:52:54:00:4f:8b:3a}
	I1213 10:26:08.156141  422337 main.go:143] libmachine: domain test-preload-547899 has defined IP address 192.168.39.150 and MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:08.156362  422337 main.go:143] libmachine: Using SSH client type: native
	I1213 10:26:08.156599  422337 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I1213 10:26:08.156617  422337 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-547899' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-547899/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-547899' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:26:08.274136  422337 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:26:08.274171  422337 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22127-387918/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-387918/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-387918/.minikube}
	I1213 10:26:08.274193  422337 buildroot.go:174] setting up certificates
	I1213 10:26:08.274204  422337 provision.go:84] configureAuth start
	I1213 10:26:08.277222  422337 main.go:143] libmachine: domain test-preload-547899 has defined MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:08.277746  422337 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4f:8b:3a", ip: ""} in network mk-test-preload-547899: {Iface:virbr1 ExpiryTime:2025-12-13 11:26:05 +0000 UTC Type:0 Mac:52:54:00:4f:8b:3a Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-547899 Clientid:01:52:54:00:4f:8b:3a}
	I1213 10:26:08.277781  422337 main.go:143] libmachine: domain test-preload-547899 has defined IP address 192.168.39.150 and MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:08.280406  422337 main.go:143] libmachine: domain test-preload-547899 has defined MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:08.280788  422337 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4f:8b:3a", ip: ""} in network mk-test-preload-547899: {Iface:virbr1 ExpiryTime:2025-12-13 11:26:05 +0000 UTC Type:0 Mac:52:54:00:4f:8b:3a Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-547899 Clientid:01:52:54:00:4f:8b:3a}
	I1213 10:26:08.280813  422337 main.go:143] libmachine: domain test-preload-547899 has defined IP address 192.168.39.150 and MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:08.280975  422337 provision.go:143] copyHostCerts
	I1213 10:26:08.281035  422337 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-387918/.minikube/ca.pem, removing ...
	I1213 10:26:08.281051  422337 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-387918/.minikube/ca.pem
	I1213 10:26:08.281117  422337 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-387918/.minikube/ca.pem (1078 bytes)
	I1213 10:26:08.281225  422337 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-387918/.minikube/cert.pem, removing ...
	I1213 10:26:08.281234  422337 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-387918/.minikube/cert.pem
	I1213 10:26:08.281261  422337 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-387918/.minikube/cert.pem (1123 bytes)
	I1213 10:26:08.281348  422337 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-387918/.minikube/key.pem, removing ...
	I1213 10:26:08.281356  422337 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-387918/.minikube/key.pem
	I1213 10:26:08.281380  422337 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-387918/.minikube/key.pem (1675 bytes)
	I1213 10:26:08.281444  422337 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-387918/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca-key.pem org=jenkins.test-preload-547899 san=[127.0.0.1 192.168.39.150 localhost minikube test-preload-547899]
	I1213 10:26:08.339403  422337 provision.go:177] copyRemoteCerts
	I1213 10:26:08.339473  422337 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:26:08.342523  422337 main.go:143] libmachine: domain test-preload-547899 has defined MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:08.342950  422337 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4f:8b:3a", ip: ""} in network mk-test-preload-547899: {Iface:virbr1 ExpiryTime:2025-12-13 11:26:05 +0000 UTC Type:0 Mac:52:54:00:4f:8b:3a Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-547899 Clientid:01:52:54:00:4f:8b:3a}
	I1213 10:26:08.342979  422337 main.go:143] libmachine: domain test-preload-547899 has defined IP address 192.168.39.150 and MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:08.343176  422337 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/test-preload-547899/id_rsa Username:docker}
	I1213 10:26:08.429103  422337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 10:26:08.466059  422337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1213 10:26:08.499040  422337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:26:08.529815  422337 provision.go:87] duration metric: took 255.573188ms to configureAuth
	I1213 10:26:08.529847  422337 buildroot.go:189] setting minikube options for container-runtime
	I1213 10:26:08.530050  422337 config.go:182] Loaded profile config "test-preload-547899": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:26:08.533042  422337 main.go:143] libmachine: domain test-preload-547899 has defined MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:08.533475  422337 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4f:8b:3a", ip: ""} in network mk-test-preload-547899: {Iface:virbr1 ExpiryTime:2025-12-13 11:26:05 +0000 UTC Type:0 Mac:52:54:00:4f:8b:3a Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-547899 Clientid:01:52:54:00:4f:8b:3a}
	I1213 10:26:08.533503  422337 main.go:143] libmachine: domain test-preload-547899 has defined IP address 192.168.39.150 and MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:08.533656  422337 main.go:143] libmachine: Using SSH client type: native
	I1213 10:26:08.533872  422337 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I1213 10:26:08.533901  422337 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 10:26:08.769383  422337 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 10:26:08.769411  422337 machine.go:97] duration metric: took 848.855853ms to provisionDockerMachine
	I1213 10:26:08.769424  422337 start.go:293] postStartSetup for "test-preload-547899" (driver="kvm2")
	I1213 10:26:08.769435  422337 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:26:08.769493  422337 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:26:08.773085  422337 main.go:143] libmachine: domain test-preload-547899 has defined MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:08.773687  422337 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4f:8b:3a", ip: ""} in network mk-test-preload-547899: {Iface:virbr1 ExpiryTime:2025-12-13 11:26:05 +0000 UTC Type:0 Mac:52:54:00:4f:8b:3a Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-547899 Clientid:01:52:54:00:4f:8b:3a}
	I1213 10:26:08.773716  422337 main.go:143] libmachine: domain test-preload-547899 has defined IP address 192.168.39.150 and MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:08.773891  422337 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/test-preload-547899/id_rsa Username:docker}
	I1213 10:26:08.857244  422337 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:26:08.864927  422337 info.go:137] Remote host: Buildroot 2025.02
	I1213 10:26:08.864990  422337 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-387918/.minikube/addons for local assets ...
	I1213 10:26:08.865072  422337 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-387918/.minikube/files for local assets ...
	I1213 10:26:08.865167  422337 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-387918/.minikube/files/etc/ssl/certs/3918772.pem -> 3918772.pem in /etc/ssl/certs
	I1213 10:26:08.865291  422337 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 10:26:08.879080  422337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/files/etc/ssl/certs/3918772.pem --> /etc/ssl/certs/3918772.pem (1708 bytes)
	I1213 10:26:08.908515  422337 start.go:296] duration metric: took 139.072114ms for postStartSetup
	I1213 10:26:08.908565  422337 fix.go:56] duration metric: took 14.555326581s for fixHost
	I1213 10:26:08.911299  422337 main.go:143] libmachine: domain test-preload-547899 has defined MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:08.911668  422337 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4f:8b:3a", ip: ""} in network mk-test-preload-547899: {Iface:virbr1 ExpiryTime:2025-12-13 11:26:05 +0000 UTC Type:0 Mac:52:54:00:4f:8b:3a Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-547899 Clientid:01:52:54:00:4f:8b:3a}
	I1213 10:26:08.911698  422337 main.go:143] libmachine: domain test-preload-547899 has defined IP address 192.168.39.150 and MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:08.911868  422337 main.go:143] libmachine: Using SSH client type: native
	I1213 10:26:08.912176  422337 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I1213 10:26:08.912203  422337 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1213 10:26:09.015953  422337 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765621568.980497796
	
	I1213 10:26:09.015984  422337 fix.go:216] guest clock: 1765621568.980497796
	I1213 10:26:09.015992  422337 fix.go:229] Guest: 2025-12-13 10:26:08.980497796 +0000 UTC Remote: 2025-12-13 10:26:08.908570146 +0000 UTC m=+14.664980920 (delta=71.92765ms)
	I1213 10:26:09.016015  422337 fix.go:200] guest clock delta is within tolerance: 71.92765ms
	I1213 10:26:09.016020  422337 start.go:83] releasing machines lock for "test-preload-547899", held for 14.662791944s
	I1213 10:26:09.018570  422337 main.go:143] libmachine: domain test-preload-547899 has defined MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:09.019060  422337 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4f:8b:3a", ip: ""} in network mk-test-preload-547899: {Iface:virbr1 ExpiryTime:2025-12-13 11:26:05 +0000 UTC Type:0 Mac:52:54:00:4f:8b:3a Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-547899 Clientid:01:52:54:00:4f:8b:3a}
	I1213 10:26:09.019091  422337 main.go:143] libmachine: domain test-preload-547899 has defined IP address 192.168.39.150 and MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:09.019597  422337 ssh_runner.go:195] Run: cat /version.json
	I1213 10:26:09.019670  422337 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:26:09.022975  422337 main.go:143] libmachine: domain test-preload-547899 has defined MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:09.023023  422337 main.go:143] libmachine: domain test-preload-547899 has defined MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:09.023462  422337 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4f:8b:3a", ip: ""} in network mk-test-preload-547899: {Iface:virbr1 ExpiryTime:2025-12-13 11:26:05 +0000 UTC Type:0 Mac:52:54:00:4f:8b:3a Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-547899 Clientid:01:52:54:00:4f:8b:3a}
	I1213 10:26:09.023497  422337 main.go:143] libmachine: domain test-preload-547899 has defined IP address 192.168.39.150 and MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:09.023555  422337 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4f:8b:3a", ip: ""} in network mk-test-preload-547899: {Iface:virbr1 ExpiryTime:2025-12-13 11:26:05 +0000 UTC Type:0 Mac:52:54:00:4f:8b:3a Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-547899 Clientid:01:52:54:00:4f:8b:3a}
	I1213 10:26:09.023592  422337 main.go:143] libmachine: domain test-preload-547899 has defined IP address 192.168.39.150 and MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:09.023667  422337 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/test-preload-547899/id_rsa Username:docker}
	I1213 10:26:09.023895  422337 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/test-preload-547899/id_rsa Username:docker}
	I1213 10:26:09.123081  422337 ssh_runner.go:195] Run: systemctl --version
	I1213 10:26:09.129412  422337 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 10:26:09.276504  422337 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:26:09.282919  422337 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:26:09.282992  422337 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:26:09.302360  422337 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 10:26:09.302391  422337 start.go:496] detecting cgroup driver to use...
	I1213 10:26:09.302458  422337 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:26:09.322036  422337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:26:09.339419  422337 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:26:09.339488  422337 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:26:09.357093  422337 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:26:09.374835  422337 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:26:09.516683  422337 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:26:09.735840  422337 docker.go:234] disabling docker service ...
	I1213 10:26:09.735931  422337 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:26:09.752449  422337 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:26:09.767408  422337 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:26:09.923765  422337 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:26:10.077534  422337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:26:10.093436  422337 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:26:10.115759  422337 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 10:26:10.115853  422337 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:26:10.127952  422337 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 10:26:10.128019  422337 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:26:10.140024  422337 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:26:10.151661  422337 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:26:10.163283  422337 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:26:10.175503  422337 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:26:10.187247  422337 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:26:10.206265  422337 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
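For reference, the sequence of sed calls above edits individual keys of /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, default sysctls). A minimal Go sketch of the same line-oriented rewrite, as a hypothetical helper and not minikube's actual implementation:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// replaceLine rewrites every line matching pattern with repl inside a config
// file, mirroring the `sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'`
// call in the log above. Path and pattern are illustrative assumptions.
func replaceLine(path, pattern, repl string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile("(?m)" + pattern)
	return os.WriteFile(path, re.ReplaceAll(data, []byte(repl)), 0o644)
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
	if err := replaceLine(conf, `^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}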
	I1213 10:26:10.217745  422337 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:26:10.227937  422337 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 10:26:10.228004  422337 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 10:26:10.246531  422337 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:26:10.257720  422337 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:26:10.395343  422337 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 10:26:10.501466  422337 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 10:26:10.501537  422337 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 10:26:10.506688  422337 start.go:564] Will wait 60s for crictl version
	I1213 10:26:10.506774  422337 ssh_runner.go:195] Run: which crictl
	I1213 10:26:10.510641  422337 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 10:26:10.543771  422337 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1213 10:26:10.543896  422337 ssh_runner.go:195] Run: crio --version
	I1213 10:26:10.572362  422337 ssh_runner.go:195] Run: crio --version
	I1213 10:26:10.602288  422337 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1213 10:26:10.606665  422337 main.go:143] libmachine: domain test-preload-547899 has defined MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:10.607105  422337 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4f:8b:3a", ip: ""} in network mk-test-preload-547899: {Iface:virbr1 ExpiryTime:2025-12-13 11:26:05 +0000 UTC Type:0 Mac:52:54:00:4f:8b:3a Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-547899 Clientid:01:52:54:00:4f:8b:3a}
	I1213 10:26:10.607131  422337 main.go:143] libmachine: domain test-preload-547899 has defined IP address 192.168.39.150 and MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:10.607357  422337 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1213 10:26:10.611929  422337 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
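The bash one-liner above rewrites /etc/hosts idempotently: it drops any existing host.minikube.internal entry and appends a fresh one. A rough Go equivalent of that upsert, using a writable stand-in path instead of the real /etc/hosts:

package main

import (
	"log"
	"os"
	"strings"
)

// upsertHost mirrors the /etc/hosts rewrite in the log above: drop any line
// already ending in "<tab>host", then append "ip<tab>host". Editing the real
// /etc/hosts needs root, so the path is left as a parameter here.
func upsertHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Illustrative values from the log; /tmp/hosts is a stand-in for /etc/hosts.
	if err := upsertHost("/tmp/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}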
	I1213 10:26:10.626604  422337 kubeadm.go:884] updating cluster {Name:test-preload-547899 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-547899 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:26:10.626755  422337 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 10:26:10.626810  422337 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:26:10.658629  422337 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1213 10:26:10.658704  422337 ssh_runner.go:195] Run: which lz4
	I1213 10:26:10.663269  422337 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 10:26:10.667956  422337 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 10:26:10.667995  422337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1213 10:26:11.924014  422337 crio.go:462] duration metric: took 1.260783751s to copy over tarball
	I1213 10:26:11.924106  422337 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 10:26:13.540289  422337 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.616147904s)
	I1213 10:26:13.540342  422337 crio.go:469] duration metric: took 1.616289896s to extract the tarball
	I1213 10:26:13.540354  422337 ssh_runner.go:146] rm: /preloaded.tar.lz4
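The preload step above copies the tarball over SSH and unpacks it with tar and lz4. A minimal sketch of the same extraction via os/exec, with the paths taken from the log and the presence of GNU tar and lz4 on the guest assumed:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Unpack a preloaded image tarball the same way the log above does.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
}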
	I1213 10:26:13.576870  422337 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:26:13.614072  422337 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:26:13.614101  422337 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:26:13.614110  422337 kubeadm.go:935] updating node { 192.168.39.150 8443 v1.34.2 crio true true} ...
	I1213 10:26:13.614253  422337 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-547899 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:test-preload-547899 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 10:26:13.614373  422337 ssh_runner.go:195] Run: crio config
	I1213 10:26:13.659481  422337 cni.go:84] Creating CNI manager for ""
	I1213 10:26:13.659529  422337 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 10:26:13.659551  422337 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:26:13.659600  422337 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.150 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-547899 NodeName:test-preload-547899 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:26:13.659975  422337 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.150
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-547899"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.150"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.150"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 10:26:13.660066  422337 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 10:26:13.672306  422337 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:26:13.672404  422337 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:26:13.683890  422337 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1213 10:26:13.703560  422337 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 10:26:13.722790  422337 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
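The kubeadm config rendered above is written to /var/tmp/minikube/kubeadm.yaml.new as several YAML documents. A small sketch that checks such a multi-document file parses, using gopkg.in/yaml.v3 (an assumption for illustration, not minikube's own validation path):

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Sanity-check that a multi-document kubeadm config like the one dumped
	// above is well-formed YAML. The path matches the log.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 1; ; i++ {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatalf("document %d: %v", i, err)
		}
		fmt.Printf("document %d: kind=%v\n", i, doc["kind"])
	}
}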
	I1213 10:26:13.742370  422337 ssh_runner.go:195] Run: grep 192.168.39.150	control-plane.minikube.internal$ /etc/hosts
	I1213 10:26:13.746652  422337 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.150	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:26:13.762736  422337 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:26:13.899075  422337 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:26:13.929382  422337 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/test-preload-547899 for IP: 192.168.39.150
	I1213 10:26:13.929417  422337 certs.go:195] generating shared ca certs ...
	I1213 10:26:13.929439  422337 certs.go:227] acquiring lock for ca certs: {Name:mkd63ae6418df38b62936a9f8faa40fdd87e4397 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:26:13.929655  422337 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-387918/.minikube/ca.key
	I1213 10:26:13.929744  422337 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-387918/.minikube/proxy-client-ca.key
	I1213 10:26:13.929764  422337 certs.go:257] generating profile certs ...
	I1213 10:26:13.929893  422337 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/test-preload-547899/client.key
	I1213 10:26:13.930014  422337 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/test-preload-547899/apiserver.key.6b36b5f8
	I1213 10:26:13.930108  422337 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/test-preload-547899/proxy-client.key
	I1213 10:26:13.930288  422337 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/391877.pem (1338 bytes)
	W1213 10:26:13.930355  422337 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-387918/.minikube/certs/391877_empty.pem, impossibly tiny 0 bytes
	I1213 10:26:13.930371  422337 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:26:13.930413  422337 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem (1078 bytes)
	I1213 10:26:13.930454  422337 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:26:13.930490  422337 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/key.pem (1675 bytes)
	I1213 10:26:13.930563  422337 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/files/etc/ssl/certs/3918772.pem (1708 bytes)
	I1213 10:26:13.931461  422337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:26:13.970777  422337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:26:14.010244  422337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:26:14.039676  422337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 10:26:14.070571  422337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/test-preload-547899/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1213 10:26:14.099815  422337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/test-preload-547899/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 10:26:14.127717  422337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/test-preload-547899/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:26:14.155632  422337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/test-preload-547899/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 10:26:14.184210  422337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/certs/391877.pem --> /usr/share/ca-certificates/391877.pem (1338 bytes)
	I1213 10:26:14.211471  422337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/files/etc/ssl/certs/3918772.pem --> /usr/share/ca-certificates/3918772.pem (1708 bytes)
	I1213 10:26:14.239809  422337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:26:14.269004  422337 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:26:14.289820  422337 ssh_runner.go:195] Run: openssl version
	I1213 10:26:14.296120  422337 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3918772.pem
	I1213 10:26:14.307745  422337 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3918772.pem /etc/ssl/certs/3918772.pem
	I1213 10:26:14.319791  422337 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3918772.pem
	I1213 10:26:14.324947  422337 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 09:23 /usr/share/ca-certificates/3918772.pem
	I1213 10:26:14.325020  422337 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3918772.pem
	I1213 10:26:14.332209  422337 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:26:14.343165  422337 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3918772.pem /etc/ssl/certs/3ec20f2e.0
	I1213 10:26:14.354039  422337 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:26:14.364353  422337 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:26:14.374906  422337 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:26:14.379742  422337 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:26:14.379796  422337 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:26:14.386644  422337 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:26:14.398061  422337 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 10:26:14.409751  422337 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/391877.pem
	I1213 10:26:14.421674  422337 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/391877.pem /etc/ssl/certs/391877.pem
	I1213 10:26:14.432297  422337 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391877.pem
	I1213 10:26:14.437212  422337 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 09:23 /usr/share/ca-certificates/391877.pem
	I1213 10:26:14.437274  422337 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391877.pem
	I1213 10:26:14.444066  422337 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:26:14.454714  422337 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/391877.pem /etc/ssl/certs/51391683.0
	I1213 10:26:14.465147  422337 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:26:14.470004  422337 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:26:14.476953  422337 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:26:14.483697  422337 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:26:14.491191  422337 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:26:14.498005  422337 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:26:14.504932  422337 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
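Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours. A hedged Go sketch of the same check with crypto/x509; the path below is one of the certs from the log and is otherwise illustrative:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d, the same question `openssl x509 -checkend 86400` answers above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}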
	I1213 10:26:14.511869  422337 kubeadm.go:401] StartCluster: {Name:test-preload-547899 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-547899 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:26:14.511953  422337 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:26:14.512010  422337 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:26:14.545494  422337 cri.go:89] found id: ""
	I1213 10:26:14.545587  422337 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:26:14.557248  422337 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:26:14.557264  422337 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:26:14.557306  422337 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:26:14.568769  422337 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:26:14.569317  422337 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-547899" does not appear in /home/jenkins/minikube-integration/22127-387918/kubeconfig
	I1213 10:26:14.569448  422337 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-387918/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-547899" cluster setting kubeconfig missing "test-preload-547899" context setting]
	I1213 10:26:14.569697  422337 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/kubeconfig: {Name:mkc4c188214419e87992ca29ee1229c54fdde2b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:26:14.570256  422337 kapi.go:59] client config for test-preload-547899: &rest.Config{Host:"https://192.168.39.150:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-387918/.minikube/profiles/test-preload-547899/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-387918/.minikube/profiles/test-preload-547899/client.key", CAFile:"/home/jenkins/minikube-integration/22127-387918/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:26:14.570682  422337 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 10:26:14.570697  422337 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 10:26:14.570701  422337 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 10:26:14.570705  422337 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 10:26:14.570708  422337 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 10:26:14.571111  422337 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:26:14.581594  422337 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.150
	I1213 10:26:14.581625  422337 kubeadm.go:1161] stopping kube-system containers ...
	I1213 10:26:14.581640  422337 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 10:26:14.581689  422337 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:26:14.612830  422337 cri.go:89] found id: ""
	I1213 10:26:14.612906  422337 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 10:26:14.634507  422337 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:26:14.645315  422337 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:26:14.645350  422337 kubeadm.go:158] found existing configuration files:
	
	I1213 10:26:14.645406  422337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 10:26:14.655830  422337 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:26:14.655883  422337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:26:14.666904  422337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 10:26:14.676867  422337 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:26:14.676952  422337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:26:14.687933  422337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 10:26:14.698031  422337 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:26:14.698114  422337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:26:14.708912  422337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 10:26:14.719011  422337 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:26:14.719087  422337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:26:14.729755  422337 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:26:14.740452  422337 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:26:14.791609  422337 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:26:16.248424  422337 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.456770712s)
	I1213 10:26:16.248534  422337 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:26:16.498075  422337 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:26:16.561366  422337 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:26:16.656184  422337 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:26:16.656307  422337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:26:17.156485  422337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:26:17.656564  422337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:26:18.157388  422337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:26:18.657384  422337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:26:18.693893  422337 api_server.go:72] duration metric: took 2.037719924s to wait for apiserver process to appear ...
	I1213 10:26:18.693925  422337 api_server.go:88] waiting for apiserver healthz status ...
	I1213 10:26:18.693947  422337 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I1213 10:26:18.694516  422337 api_server.go:269] stopped: https://192.168.39.150:8443/healthz: Get "https://192.168.39.150:8443/healthz": dial tcp 192.168.39.150:8443: connect: connection refused
	I1213 10:26:19.194217  422337 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I1213 10:26:21.356981  422337 api_server.go:279] https://192.168.39.150:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 10:26:21.357038  422337 api_server.go:103] status: https://192.168.39.150:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 10:26:21.357055  422337 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I1213 10:26:21.463192  422337 api_server.go:279] https://192.168.39.150:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 10:26:21.463226  422337 api_server.go:103] status: https://192.168.39.150:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 10:26:21.694728  422337 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I1213 10:26:21.700220  422337 api_server.go:279] https://192.168.39.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 10:26:21.700247  422337 api_server.go:103] status: https://192.168.39.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 10:26:22.194983  422337 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I1213 10:26:22.200111  422337 api_server.go:279] https://192.168.39.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 10:26:22.200137  422337 api_server.go:103] status: https://192.168.39.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 10:26:22.694882  422337 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I1213 10:26:22.700578  422337 api_server.go:279] https://192.168.39.150:8443/healthz returned 200:
	ok
	I1213 10:26:22.708484  422337 api_server.go:141] control plane version: v1.34.2
	I1213 10:26:22.708522  422337 api_server.go:131] duration metric: took 4.014588602s to wait for apiserver health ...
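The wait loop above polls https://192.168.39.150:8443/healthz until it returns 200 "ok", tolerating the early connection-refused, 403 and 500 responses shown in the log. A minimal sketch of such a polling loop, assuming an insecure TLS client is acceptable only because this is a throwaway test cluster:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	// Poll the apiserver healthz endpoint until it reports "ok", roughly what
	// the wait loop in the log above is doing. Endpoint and timeouts are assumptions.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.150:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body))
				return
			}
			log.Printf("healthz returned %d, retrying", resp.StatusCode)
		} else {
			log.Printf("healthz not reachable yet: %v", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver never became healthy")
}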
	I1213 10:26:22.708534  422337 cni.go:84] Creating CNI manager for ""
	I1213 10:26:22.708543  422337 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 10:26:22.710446  422337 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 10:26:22.711661  422337 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 10:26:22.724105  422337 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1213 10:26:22.759150  422337 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 10:26:22.766005  422337 system_pods.go:59] 7 kube-system pods found
	I1213 10:26:22.766048  422337 system_pods.go:61] "coredns-66bc5c9577-lbqwg" [ced6945d-c35e-4286-a345-104ac8208f20] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:26:22.766057  422337 system_pods.go:61] "etcd-test-preload-547899" [873e74c2-5366-4c99-9266-95b45ce3cbc5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 10:26:22.766065  422337 system_pods.go:61] "kube-apiserver-test-preload-547899" [df2e4aca-1299-4d3b-b387-1738a5b97cf2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 10:26:22.766071  422337 system_pods.go:61] "kube-controller-manager-test-preload-547899" [cccc34f4-b715-4414-a9c1-a7a065f445a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:26:22.766078  422337 system_pods.go:61] "kube-proxy-t7rjs" [34cd855b-5482-45fc-a273-d83e9819bed6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 10:26:22.766084  422337 system_pods.go:61] "kube-scheduler-test-preload-547899" [1fc7fa76-d62c-4c6f-add2-d3ec685f8747] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:26:22.766088  422337 system_pods.go:61] "storage-provisioner" [2c2d34e4-32e5-4226-bb45-7c22b5d162a9] Running
	I1213 10:26:22.766095  422337 system_pods.go:74] duration metric: took 6.9171ms to wait for pod list to return data ...
	I1213 10:26:22.766103  422337 node_conditions.go:102] verifying NodePressure condition ...
	I1213 10:26:22.769872  422337 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 10:26:22.769909  422337 node_conditions.go:123] node cpu capacity is 2
	I1213 10:26:22.769930  422337 node_conditions.go:105] duration metric: took 3.820209ms to run NodePressure ...
	I1213 10:26:22.770000  422337 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:26:23.037011  422337 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1213 10:26:23.041183  422337 kubeadm.go:744] kubelet initialised
	I1213 10:26:23.041214  422337 kubeadm.go:745] duration metric: took 4.161835ms waiting for restarted kubelet to initialise ...
	I1213 10:26:23.041241  422337 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 10:26:23.056445  422337 ops.go:34] apiserver oom_adj: -16
	I1213 10:26:23.056471  422337 kubeadm.go:602] duration metric: took 8.499200515s to restartPrimaryControlPlane
	I1213 10:26:23.056486  422337 kubeadm.go:403] duration metric: took 8.544620734s to StartCluster
	I1213 10:26:23.056514  422337 settings.go:142] acquiring lock: {Name:mk59569246b81cd6fde64cc849a423eeb59f3563 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:26:23.056598  422337 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-387918/kubeconfig
	I1213 10:26:23.057175  422337 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/kubeconfig: {Name:mkc4c188214419e87992ca29ee1229c54fdde2b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:26:23.057455  422337 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 10:26:23.057535  422337 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:26:23.057639  422337 addons.go:70] Setting storage-provisioner=true in profile "test-preload-547899"
	I1213 10:26:23.057666  422337 config.go:182] Loaded profile config "test-preload-547899": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:26:23.057673  422337 addons.go:70] Setting default-storageclass=true in profile "test-preload-547899"
	I1213 10:26:23.057699  422337 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-547899"
	I1213 10:26:23.057675  422337 addons.go:239] Setting addon storage-provisioner=true in "test-preload-547899"
	W1213 10:26:23.057737  422337 addons.go:248] addon storage-provisioner should already be in state true
	I1213 10:26:23.057763  422337 host.go:66] Checking if "test-preload-547899" exists ...
	I1213 10:26:23.059161  422337 out.go:179] * Verifying Kubernetes components...
	I1213 10:26:23.060031  422337 kapi.go:59] client config for test-preload-547899: &rest.Config{Host:"https://192.168.39.150:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-387918/.minikube/profiles/test-preload-547899/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-387918/.minikube/profiles/test-preload-547899/client.key", CAFile:"/home/jenkins/minikube-integration/22127-387918/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:26:23.060343  422337 addons.go:239] Setting addon default-storageclass=true in "test-preload-547899"
	W1213 10:26:23.060360  422337 addons.go:248] addon default-storageclass should already be in state true
	I1213 10:26:23.060385  422337 host.go:66] Checking if "test-preload-547899" exists ...
	I1213 10:26:23.061218  422337 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:26:23.061273  422337 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:26:23.062086  422337 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:26:23.062103  422337 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:26:23.062653  422337 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:26:23.062674  422337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:26:23.065532  422337 main.go:143] libmachine: domain test-preload-547899 has defined MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:23.065983  422337 main.go:143] libmachine: domain test-preload-547899 has defined MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:23.065985  422337 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4f:8b:3a", ip: ""} in network mk-test-preload-547899: {Iface:virbr1 ExpiryTime:2025-12-13 11:26:05 +0000 UTC Type:0 Mac:52:54:00:4f:8b:3a Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-547899 Clientid:01:52:54:00:4f:8b:3a}
	I1213 10:26:23.066134  422337 main.go:143] libmachine: domain test-preload-547899 has defined IP address 192.168.39.150 and MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:23.066282  422337 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/test-preload-547899/id_rsa Username:docker}
	I1213 10:26:23.066492  422337 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4f:8b:3a", ip: ""} in network mk-test-preload-547899: {Iface:virbr1 ExpiryTime:2025-12-13 11:26:05 +0000 UTC Type:0 Mac:52:54:00:4f:8b:3a Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-547899 Clientid:01:52:54:00:4f:8b:3a}
	I1213 10:26:23.066522  422337 main.go:143] libmachine: domain test-preload-547899 has defined IP address 192.168.39.150 and MAC address 52:54:00:4f:8b:3a in network mk-test-preload-547899
	I1213 10:26:23.066636  422337 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/test-preload-547899/id_rsa Username:docker}
	I1213 10:26:23.249068  422337 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:26:23.269917  422337 node_ready.go:35] waiting up to 6m0s for node "test-preload-547899" to be "Ready" ...
	I1213 10:26:23.272701  422337 node_ready.go:49] node "test-preload-547899" is "Ready"
	I1213 10:26:23.272731  422337 node_ready.go:38] duration metric: took 2.777226ms for node "test-preload-547899" to be "Ready" ...
	I1213 10:26:23.272744  422337 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:26:23.272807  422337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:26:23.291540  422337 api_server.go:72] duration metric: took 234.045736ms to wait for apiserver process to appear ...
	I1213 10:26:23.291569  422337 api_server.go:88] waiting for apiserver healthz status ...
	I1213 10:26:23.291589  422337 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I1213 10:26:23.297513  422337 api_server.go:279] https://192.168.39.150:8443/healthz returned 200:
	ok
	I1213 10:26:23.298903  422337 api_server.go:141] control plane version: v1.34.2
	I1213 10:26:23.298928  422337 api_server.go:131] duration metric: took 7.352774ms to wait for apiserver health ...
	I1213 10:26:23.298937  422337 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 10:26:23.302152  422337 system_pods.go:59] 7 kube-system pods found
	I1213 10:26:23.302189  422337 system_pods.go:61] "coredns-66bc5c9577-lbqwg" [ced6945d-c35e-4286-a345-104ac8208f20] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:26:23.302199  422337 system_pods.go:61] "etcd-test-preload-547899" [873e74c2-5366-4c99-9266-95b45ce3cbc5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 10:26:23.302210  422337 system_pods.go:61] "kube-apiserver-test-preload-547899" [df2e4aca-1299-4d3b-b387-1738a5b97cf2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 10:26:23.302218  422337 system_pods.go:61] "kube-controller-manager-test-preload-547899" [cccc34f4-b715-4414-a9c1-a7a065f445a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:26:23.302225  422337 system_pods.go:61] "kube-proxy-t7rjs" [34cd855b-5482-45fc-a273-d83e9819bed6] Running
	I1213 10:26:23.302236  422337 system_pods.go:61] "kube-scheduler-test-preload-547899" [1fc7fa76-d62c-4c6f-add2-d3ec685f8747] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:26:23.302246  422337 system_pods.go:61] "storage-provisioner" [2c2d34e4-32e5-4226-bb45-7c22b5d162a9] Running
	I1213 10:26:23.302256  422337 system_pods.go:74] duration metric: took 3.31206ms to wait for pod list to return data ...
	I1213 10:26:23.302267  422337 default_sa.go:34] waiting for default service account to be created ...
	I1213 10:26:23.305094  422337 default_sa.go:45] found service account: "default"
	I1213 10:26:23.305116  422337 default_sa.go:55] duration metric: took 2.842408ms for default service account to be created ...
	I1213 10:26:23.305125  422337 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 10:26:23.307944  422337 system_pods.go:86] 7 kube-system pods found
	I1213 10:26:23.307979  422337 system_pods.go:89] "coredns-66bc5c9577-lbqwg" [ced6945d-c35e-4286-a345-104ac8208f20] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:26:23.307992  422337 system_pods.go:89] "etcd-test-preload-547899" [873e74c2-5366-4c99-9266-95b45ce3cbc5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 10:26:23.308004  422337 system_pods.go:89] "kube-apiserver-test-preload-547899" [df2e4aca-1299-4d3b-b387-1738a5b97cf2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 10:26:23.308017  422337 system_pods.go:89] "kube-controller-manager-test-preload-547899" [cccc34f4-b715-4414-a9c1-a7a065f445a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:26:23.308024  422337 system_pods.go:89] "kube-proxy-t7rjs" [34cd855b-5482-45fc-a273-d83e9819bed6] Running
	I1213 10:26:23.308039  422337 system_pods.go:89] "kube-scheduler-test-preload-547899" [1fc7fa76-d62c-4c6f-add2-d3ec685f8747] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:26:23.308050  422337 system_pods.go:89] "storage-provisioner" [2c2d34e4-32e5-4226-bb45-7c22b5d162a9] Running
	I1213 10:26:23.308060  422337 system_pods.go:126] duration metric: took 2.928427ms to wait for k8s-apps to be running ...
	I1213 10:26:23.308069  422337 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 10:26:23.308117  422337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:26:23.325149  422337 system_svc.go:56] duration metric: took 17.066343ms WaitForService to wait for kubelet
	I1213 10:26:23.325185  422337 kubeadm.go:587] duration metric: took 267.69816ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:26:23.325203  422337 node_conditions.go:102] verifying NodePressure condition ...
	I1213 10:26:23.328020  422337 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 10:26:23.328046  422337 node_conditions.go:123] node cpu capacity is 2
	I1213 10:26:23.328061  422337 node_conditions.go:105] duration metric: took 2.85306ms to run NodePressure ...
	I1213 10:26:23.328076  422337 start.go:242] waiting for startup goroutines ...
	I1213 10:26:23.390426  422337 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:26:23.395224  422337 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:26:24.220798  422337 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1213 10:26:24.222176  422337 addons.go:530] duration metric: took 1.164653377s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1213 10:26:24.222219  422337 start.go:247] waiting for cluster config update ...
	I1213 10:26:24.222231  422337 start.go:256] writing updated cluster config ...
	I1213 10:26:24.222496  422337 ssh_runner.go:195] Run: rm -f paused
	I1213 10:26:24.227630  422337 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 10:26:24.228129  422337 kapi.go:59] client config for test-preload-547899: &rest.Config{Host:"https://192.168.39.150:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-387918/.minikube/profiles/test-preload-547899/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-387918/.minikube/profiles/test-preload-547899/client.key", CAFile:"/home/jenkins/minikube-integration/22127-387918/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:26:24.231587  422337 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lbqwg" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 10:26:26.237578  422337 pod_ready.go:104] pod "coredns-66bc5c9577-lbqwg" is not "Ready", error: <nil>
	W1213 10:26:28.238897  422337 pod_ready.go:104] pod "coredns-66bc5c9577-lbqwg" is not "Ready", error: <nil>
	I1213 10:26:30.738047  422337 pod_ready.go:94] pod "coredns-66bc5c9577-lbqwg" is "Ready"
	I1213 10:26:30.738076  422337 pod_ready.go:86] duration metric: took 6.506461646s for pod "coredns-66bc5c9577-lbqwg" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:26:30.741154  422337 pod_ready.go:83] waiting for pod "etcd-test-preload-547899" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 10:26:32.750058  422337 pod_ready.go:104] pod "etcd-test-preload-547899" is not "Ready", error: <nil>
	W1213 10:26:35.246262  422337 pod_ready.go:104] pod "etcd-test-preload-547899" is not "Ready", error: <nil>
	I1213 10:26:35.747746  422337 pod_ready.go:94] pod "etcd-test-preload-547899" is "Ready"
	I1213 10:26:35.747780  422337 pod_ready.go:86] duration metric: took 5.006597005s for pod "etcd-test-preload-547899" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:26:35.750288  422337 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-547899" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 10:26:37.756545  422337 pod_ready.go:104] pod "kube-apiserver-test-preload-547899" is not "Ready", error: <nil>
	I1213 10:26:38.256770  422337 pod_ready.go:94] pod "kube-apiserver-test-preload-547899" is "Ready"
	I1213 10:26:38.256809  422337 pod_ready.go:86] duration metric: took 2.506491736s for pod "kube-apiserver-test-preload-547899" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:26:38.259295  422337 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-547899" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:26:38.263454  422337 pod_ready.go:94] pod "kube-controller-manager-test-preload-547899" is "Ready"
	I1213 10:26:38.263477  422337 pod_ready.go:86] duration metric: took 4.158744ms for pod "kube-controller-manager-test-preload-547899" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:26:38.265786  422337 pod_ready.go:83] waiting for pod "kube-proxy-t7rjs" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:26:38.270639  422337 pod_ready.go:94] pod "kube-proxy-t7rjs" is "Ready"
	I1213 10:26:38.270659  422337 pod_ready.go:86] duration metric: took 4.853113ms for pod "kube-proxy-t7rjs" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:26:38.272514  422337 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-547899" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:26:38.546225  422337 pod_ready.go:94] pod "kube-scheduler-test-preload-547899" is "Ready"
	I1213 10:26:38.546261  422337 pod_ready.go:86] duration metric: took 273.726274ms for pod "kube-scheduler-test-preload-547899" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:26:38.546276  422337 pod_ready.go:40] duration metric: took 14.31861844s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 10:26:38.591140  422337 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 10:26:38.592887  422337 out.go:179] * Done! kubectl is now configured to use "test-preload-547899" cluster and "default" namespace by default
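
The apiserver wait earlier in this log (api_server.go probing https://192.168.39.150:8443/healthz until it answers 200/ok) amounts to a plain HTTPS GET against the control plane. Below is a minimal Go sketch of such a probe, assuming the endpoint taken from the log; TLS verification is skipped only to keep the sketch short, whereas the real check authenticates with the client certificate and CA listed in the rest.Config above. This is illustrative, not minikube's code.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above. Verification is disabled purely for
	// brevity; a production client would load the CA/cert from the kubeconfig.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.150:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
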
	
	
	==> CRI-O <==
	Dec 13 10:26:39 test-preload-547899 crio[833]: time="2025-12-13 10:26:39.361295175Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765621599361270768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0296801e-6e88-464f-b702-e97034724568 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 10:26:39 test-preload-547899 crio[833]: time="2025-12-13 10:26:39.362245136Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=123b4a11-c915-458b-aaf7-084119aafaf2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 10:26:39 test-preload-547899 crio[833]: time="2025-12-13 10:26:39.362319469Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=123b4a11-c915-458b-aaf7-084119aafaf2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 10:26:39 test-preload-547899 crio[833]: time="2025-12-13 10:26:39.362477711Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36a93508d72bd70b0498ed62c26c28eeeab1f94dc1006eda9b572f55246e697a,PodSandboxId:84d293ef225012c056fa24388d599278df2d1cb5ef16013df0ffad586a527f27,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765621585416566027,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-lbqwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ced6945d-c35e-4286-a345-104ac8208f20,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7d286b7515e88a0d3224ca075a01af1257294f2c6c0d269e99dbd9376b3b53,PodSandboxId:b45aa6c90926bca70dbd0a1c8aac087671a1af69a3df327c50f6cdf09ee23c99,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765621582009671245,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t7rjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34cd855b-5482-45fc-a273-d83e9819bed6,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecbb508b1f6e50519557c4e54218e4c6c5fd53b4a16abdcf1296fbe3d480fb4b,PodSandboxId:7001d2e40e58118ff658577eca819c7be48f145f84695483f836223956464569,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765621581994354122,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2d34e4-32e5-4226-bb45-7c22b5d162a9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:261c7260a9c0591eeffec69e29f8cad49b05ab344f7d9875f2728d435138b5c2,PodSandboxId:499cd7406acc4daa2eecac3fec0350a8001e6dc7725ec295473086b8831a3c6f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765621578503579093,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-547899,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb1145eff728252410308f4f0d73bfdf,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6858a46ff75e43b72e55ada60ed786090cb36a770fb0aa225ae411efac4af51,PodSandboxId:426b4241eebac081fae9da008e2e3e6671cd2db668cb652954e6afcf96059e90,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4
b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765621578475677252,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-547899,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286577401bf44752d4d973ef159420ea,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4ad7fb1bf52dcb2af7669e68f13804e8945073c351671bf680ed598474311b,PodSandboxId:fd808a27d0954f34960004533ed58bd00d7290f0393ac841da8d38705e199117,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765621578422618779,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-547899,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9e2578e24477f9dd68f3c39fec517e0,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2719e9b0fc1b1781f1cba1b9adb29e92b9266d603d86b44b0c2e5e6a16a1821,PodSandboxId:e17381e0f9789defce5100231c1ad31df26977af04d5acc5b9efe2f24bb9a527,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765621578444699692,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-547899,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15aaac6316a0c8b6152bf8d7365e1bdb,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=123b4a11-c915-458b-aaf7-084119aafaf2 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 13 10:26:39 test-preload-547899 crio[833]: time="2025-12-13 10:26:39.395132245Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9c88f755-4a82-4861-9775-4a6a881c22f4 name=/runtime.v1.RuntimeService/Version
	Dec 13 10:26:39 test-preload-547899 crio[833]: time="2025-12-13 10:26:39.395223573Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9c88f755-4a82-4861-9775-4a6a881c22f4 name=/runtime.v1.RuntimeService/Version
	Dec 13 10:26:39 test-preload-547899 crio[833]: time="2025-12-13 10:26:39.396582440Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=338dc402-479b-4107-91b0-01471a65d0f0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 10:26:39 test-preload-547899 crio[833]: time="2025-12-13 10:26:39.397106934Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765621599397084614,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=338dc402-479b-4107-91b0-01471a65d0f0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 10:26:39 test-preload-547899 crio[833]: time="2025-12-13 10:26:39.398000855Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c2c7fbe8-f221-4ee9-939c-bfc161157793 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 10:26:39 test-preload-547899 crio[833]: time="2025-12-13 10:26:39.398282546Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c2c7fbe8-f221-4ee9-939c-bfc161157793 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 10:26:39 test-preload-547899 crio[833]: time="2025-12-13 10:26:39.398582889Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36a93508d72bd70b0498ed62c26c28eeeab1f94dc1006eda9b572f55246e697a,PodSandboxId:84d293ef225012c056fa24388d599278df2d1cb5ef16013df0ffad586a527f27,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765621585416566027,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-lbqwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ced6945d-c35e-4286-a345-104ac8208f20,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7d286b7515e88a0d3224ca075a01af1257294f2c6c0d269e99dbd9376b3b53,PodSandboxId:b45aa6c90926bca70dbd0a1c8aac087671a1af69a3df327c50f6cdf09ee23c99,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765621582009671245,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t7rjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34cd855b-5482-45fc-a273-d83e9819bed6,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecbb508b1f6e50519557c4e54218e4c6c5fd53b4a16abdcf1296fbe3d480fb4b,PodSandboxId:7001d2e40e58118ff658577eca819c7be48f145f84695483f836223956464569,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765621581994354122,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2d34e4-32e5-4226-bb45-7c22b5d162a9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:261c7260a9c0591eeffec69e29f8cad49b05ab344f7d9875f2728d435138b5c2,PodSandboxId:499cd7406acc4daa2eecac3fec0350a8001e6dc7725ec295473086b8831a3c6f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765621578503579093,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-547899,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb1145eff728252410308f4f0d73bfdf,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6858a46ff75e43b72e55ada60ed786090cb36a770fb0aa225ae411efac4af51,PodSandboxId:426b4241eebac081fae9da008e2e3e6671cd2db668cb652954e6afcf96059e90,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4
b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765621578475677252,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-547899,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286577401bf44752d4d973ef159420ea,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4ad7fb1bf52dcb2af7669e68f13804e8945073c351671bf680ed598474311b,PodSandboxId:fd808a27d0954f34960004533ed58bd00d7290f0393ac841da8d38705e199117,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765621578422618779,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-547899,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9e2578e24477f9dd68f3c39fec517e0,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2719e9b0fc1b1781f1cba1b9adb29e92b9266d603d86b44b0c2e5e6a16a1821,PodSandboxId:e17381e0f9789defce5100231c1ad31df26977af04d5acc5b9efe2f24bb9a527,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765621578444699692,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-547899,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15aaac6316a0c8b6152bf8d7365e1bdb,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c2c7fbe8-f221-4ee9-939c-bfc161157793 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 13 10:26:39 test-preload-547899 crio[833]: time="2025-12-13 10:26:39.431065350Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=22718955-495a-4495-8729-d4f35a871e90 name=/runtime.v1.RuntimeService/Version
	Dec 13 10:26:39 test-preload-547899 crio[833]: time="2025-12-13 10:26:39.431137960Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=22718955-495a-4495-8729-d4f35a871e90 name=/runtime.v1.RuntimeService/Version
	Dec 13 10:26:39 test-preload-547899 crio[833]: time="2025-12-13 10:26:39.432957434Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=43f89621-cc17-4c84-8ee7-bd57c65e312c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 10:26:39 test-preload-547899 crio[833]: time="2025-12-13 10:26:39.433450285Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765621599433429352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43f89621-cc17-4c84-8ee7-bd57c65e312c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 10:26:39 test-preload-547899 crio[833]: time="2025-12-13 10:26:39.434541357Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=02fdd881-a54e-4ad4-82ed-ec66b0b305b5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 10:26:39 test-preload-547899 crio[833]: time="2025-12-13 10:26:39.434626202Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=02fdd881-a54e-4ad4-82ed-ec66b0b305b5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 10:26:39 test-preload-547899 crio[833]: time="2025-12-13 10:26:39.434786379Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36a93508d72bd70b0498ed62c26c28eeeab1f94dc1006eda9b572f55246e697a,PodSandboxId:84d293ef225012c056fa24388d599278df2d1cb5ef16013df0ffad586a527f27,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765621585416566027,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-lbqwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ced6945d-c35e-4286-a345-104ac8208f20,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7d286b7515e88a0d3224ca075a01af1257294f2c6c0d269e99dbd9376b3b53,PodSandboxId:b45aa6c90926bca70dbd0a1c8aac087671a1af69a3df327c50f6cdf09ee23c99,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765621582009671245,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t7rjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34cd855b-5482-45fc-a273-d83e9819bed6,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecbb508b1f6e50519557c4e54218e4c6c5fd53b4a16abdcf1296fbe3d480fb4b,PodSandboxId:7001d2e40e58118ff658577eca819c7be48f145f84695483f836223956464569,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765621581994354122,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2d34e4-32e5-4226-bb45-7c22b5d162a9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:261c7260a9c0591eeffec69e29f8cad49b05ab344f7d9875f2728d435138b5c2,PodSandboxId:499cd7406acc4daa2eecac3fec0350a8001e6dc7725ec295473086b8831a3c6f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765621578503579093,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-547899,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb1145eff728252410308f4f0d73bfdf,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6858a46ff75e43b72e55ada60ed786090cb36a770fb0aa225ae411efac4af51,PodSandboxId:426b4241eebac081fae9da008e2e3e6671cd2db668cb652954e6afcf96059e90,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4
b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765621578475677252,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-547899,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286577401bf44752d4d973ef159420ea,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4ad7fb1bf52dcb2af7669e68f13804e8945073c351671bf680ed598474311b,PodSandboxId:fd808a27d0954f34960004533ed58bd00d7290f0393ac841da8d38705e199117,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765621578422618779,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-547899,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9e2578e24477f9dd68f3c39fec517e0,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2719e9b0fc1b1781f1cba1b9adb29e92b9266d603d86b44b0c2e5e6a16a1821,PodSandboxId:e17381e0f9789defce5100231c1ad31df26977af04d5acc5b9efe2f24bb9a527,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765621578444699692,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-547899,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15aaac6316a0c8b6152bf8d7365e1bdb,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=02fdd881-a54e-4ad4-82ed-ec66b0b305b5 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 13 10:26:39 test-preload-547899 crio[833]: time="2025-12-13 10:26:39.462842449Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8524afa1-fb96-4c4c-9bba-2440eeabb4eb name=/runtime.v1.RuntimeService/Version
	Dec 13 10:26:39 test-preload-547899 crio[833]: time="2025-12-13 10:26:39.462959596Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8524afa1-fb96-4c4c-9bba-2440eeabb4eb name=/runtime.v1.RuntimeService/Version
	Dec 13 10:26:39 test-preload-547899 crio[833]: time="2025-12-13 10:26:39.464741624Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=edef31d9-6057-426d-b958-8040f8153973 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 10:26:39 test-preload-547899 crio[833]: time="2025-12-13 10:26:39.465534643Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765621599465509484,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=edef31d9-6057-426d-b958-8040f8153973 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 10:26:39 test-preload-547899 crio[833]: time="2025-12-13 10:26:39.466802887Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8fa743a0-af2d-46f6-bba6-ed0da9299d9c name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 10:26:39 test-preload-547899 crio[833]: time="2025-12-13 10:26:39.466910327Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8fa743a0-af2d-46f6-bba6-ed0da9299d9c name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 10:26:39 test-preload-547899 crio[833]: time="2025-12-13 10:26:39.467129253Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36a93508d72bd70b0498ed62c26c28eeeab1f94dc1006eda9b572f55246e697a,PodSandboxId:84d293ef225012c056fa24388d599278df2d1cb5ef16013df0ffad586a527f27,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765621585416566027,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-lbqwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ced6945d-c35e-4286-a345-104ac8208f20,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7d286b7515e88a0d3224ca075a01af1257294f2c6c0d269e99dbd9376b3b53,PodSandboxId:b45aa6c90926bca70dbd0a1c8aac087671a1af69a3df327c50f6cdf09ee23c99,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765621582009671245,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t7rjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34cd855b-5482-45fc-a273-d83e9819bed6,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecbb508b1f6e50519557c4e54218e4c6c5fd53b4a16abdcf1296fbe3d480fb4b,PodSandboxId:7001d2e40e58118ff658577eca819c7be48f145f84695483f836223956464569,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765621581994354122,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2d34e4-32e5-4226-bb45-7c22b5d162a9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:261c7260a9c0591eeffec69e29f8cad49b05ab344f7d9875f2728d435138b5c2,PodSandboxId:499cd7406acc4daa2eecac3fec0350a8001e6dc7725ec295473086b8831a3c6f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765621578503579093,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-547899,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb1145eff728252410308f4f0d73bfdf,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6858a46ff75e43b72e55ada60ed786090cb36a770fb0aa225ae411efac4af51,PodSandboxId:426b4241eebac081fae9da008e2e3e6671cd2db668cb652954e6afcf96059e90,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4
b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765621578475677252,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-547899,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286577401bf44752d4d973ef159420ea,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4ad7fb1bf52dcb2af7669e68f13804e8945073c351671bf680ed598474311b,PodSandboxId:fd808a27d0954f34960004533ed58bd00d7290f0393ac841da8d38705e199117,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765621578422618779,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-547899,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9e2578e24477f9dd68f3c39fec517e0,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2719e9b0fc1b1781f1cba1b9adb29e92b9266d603d86b44b0c2e5e6a16a1821,PodSandboxId:e17381e0f9789defce5100231c1ad31df26977af04d5acc5b9efe2f24bb9a527,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765621578444699692,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-547899,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15aaac6316a0c8b6152bf8d7365e1bdb,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8fa743a0-af2d-46f6-bba6-ed0da9299d9c name=/runtime.v1.RuntimeServic
e/ListContainers
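
The Version/ImageFsInfo/ListContainers entries above are CRI gRPC calls answered by CRI-O over its local socket; the kubelet and crictl issue the same RPCs. A rough sketch of making the two RuntimeService calls from Go follows, assuming the conventional /var/run/crio/crio.sock path and recent google.golang.org/grpc and k8s.io/cri-api modules; it is not code from minikube or CRI-O.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	pb "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// CRI-O serves the CRI over a local unix socket; no TLS is involved.
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := pb.NewRuntimeServiceClient(conn)

	ver, err := rt.Version(ctx, &pb.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

	// An empty filter returns the full container list, matching the
	// "No filters were applied" responses in the log above.
	list, err := rt.ListContainers(ctx, &pb.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range list.Containers {
		fmt.Println(c.Id[:12], c.Metadata.Name, c.State)
	}
}

The "container status" table in the next section is the human-readable view of this same container list.
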
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	36a93508d72bd       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   14 seconds ago      Running             coredns                   1                   84d293ef22501       coredns-66bc5c9577-lbqwg                      kube-system
	8b7d286b7515e       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   17 seconds ago      Running             kube-proxy                1                   b45aa6c90926b       kube-proxy-t7rjs                              kube-system
	ecbb508b1f6e5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 seconds ago      Running             storage-provisioner       1                   7001d2e40e581       storage-provisioner                           kube-system
	261c7260a9c05       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   21 seconds ago      Running             kube-controller-manager   1                   499cd7406acc4       kube-controller-manager-test-preload-547899   kube-system
	b6858a46ff75e       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   21 seconds ago      Running             etcd                      1                   426b4241eebac       etcd-test-preload-547899                      kube-system
	d2719e9b0fc1b       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   21 seconds ago      Running             kube-apiserver            1                   e17381e0f9789       kube-apiserver-test-preload-547899            kube-system
	ff4ad7fb1bf52       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   21 seconds ago      Running             kube-scheduler            1                   fd808a27d0954       kube-scheduler-test-preload-547899            kube-system
	
	
	==> coredns [36a93508d72bd70b0498ed62c26c28eeeab1f94dc1006eda9b572f55246e697a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38299 - 20835 "HINFO IN 5878588875361139168.2521450458523970259. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.067466022s
	
	
	==> describe nodes <==
	Name:               test-preload-547899
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-547899
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b
	                    minikube.k8s.io/name=test-preload-547899
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T10_25_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 10:25:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-547899
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 10:26:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 10:26:22 +0000   Sat, 13 Dec 2025 10:25:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 10:26:22 +0000   Sat, 13 Dec 2025 10:25:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 10:26:22 +0000   Sat, 13 Dec 2025 10:25:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 10:26:22 +0000   Sat, 13 Dec 2025 10:26:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.150
	  Hostname:    test-preload-547899
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 7836aa9d8409416e88607a54a4560312
	  System UUID:                7836aa9d-8409-416e-8860-7a54a4560312
	  Boot ID:                    563feb8a-ff80-4f96-9221-c343bed0d2e4
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-lbqwg                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     67s
	  kube-system                 etcd-test-preload-547899                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         72s
	  kube-system                 kube-apiserver-test-preload-547899             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-controller-manager-test-preload-547899    200m (10%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-proxy-t7rjs                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-scheduler-test-preload-547899             100m (5%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 66s                kube-proxy       
	  Normal   Starting                 17s                kube-proxy       
	  Normal   Starting                 79s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  79s (x8 over 79s)  kubelet          Node test-preload-547899 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    79s (x8 over 79s)  kubelet          Node test-preload-547899 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     79s (x7 over 79s)  kubelet          Node test-preload-547899 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    72s                kubelet          Node test-preload-547899 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  72s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  72s                kubelet          Node test-preload-547899 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     72s                kubelet          Node test-preload-547899 status is now: NodeHasSufficientPID
	  Normal   Starting                 72s                kubelet          Starting kubelet.
	  Normal   NodeReady                71s                kubelet          Node test-preload-547899 status is now: NodeReady
	  Normal   RegisteredNode           68s                node-controller  Node test-preload-547899 event: Registered Node test-preload-547899 in Controller
	  Normal   Starting                 23s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node test-preload-547899 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node test-preload-547899 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node test-preload-547899 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 18s                kubelet          Node test-preload-547899 has been rebooted, boot id: 563feb8a-ff80-4f96-9221-c343bed0d2e4
	  Normal   RegisteredNode           15s                node-controller  Node test-preload-547899 event: Registered Node test-preload-547899 in Controller
	
	
	==> dmesg <==
	[Dec13 10:25] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[Dec13 10:26] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.006677] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.985413] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.120082] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.097754] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.465494] kauditd_printk_skb: 168 callbacks suppressed
	[  +5.215300] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [b6858a46ff75e43b72e55ada60ed786090cb36a770fb0aa225ae411efac4af51] <==
	{"level":"warn","ts":"2025-12-13T10:26:20.163356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:20.212795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:20.237854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:20.252381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:20.283187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:20.308042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:20.332728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:20.352953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:20.368228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:20.388077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:20.398869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:20.415393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:20.432292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:20.452415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:20.472284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:20.490280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:20.504632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:20.525502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:20.537943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:20.555700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:20.574469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:20.598797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:20.632848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:20.651084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:20.720173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44664","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:26:39 up 0 min,  0 users,  load average: 0.52, 0.14, 0.05
	Linux test-preload-547899 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 11 23:11:39 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [d2719e9b0fc1b1781f1cba1b9adb29e92b9266d603d86b44b0c2e5e6a16a1821] <==
	I1213 10:26:21.521337       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 10:26:21.523619       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1213 10:26:21.523700       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 10:26:21.523823       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1213 10:26:21.523916       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1213 10:26:21.523961       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1213 10:26:21.527774       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 10:26:21.529685       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1213 10:26:21.529745       1 policy_source.go:240] refreshing policies
	E1213 10:26:21.529930       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1213 10:26:21.536672       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 10:26:21.539125       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1213 10:26:21.539166       1 aggregator.go:171] initial CRD sync complete...
	I1213 10:26:21.539173       1 autoregister_controller.go:144] Starting autoregister controller
	I1213 10:26:21.539178       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 10:26:21.539183       1 cache.go:39] Caches are synced for autoregister controller
	I1213 10:26:21.606853       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 10:26:22.324681       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 10:26:22.871796       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 10:26:22.906804       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 10:26:22.939082       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 10:26:22.946400       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 10:26:24.861215       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 10:26:25.161541       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 10:26:25.215962       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [261c7260a9c0591eeffec69e29f8cad49b05ab344f7d9875f2728d435138b5c2] <==
	I1213 10:26:24.849218       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1213 10:26:24.850467       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1213 10:26:24.850500       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1213 10:26:24.851680       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1213 10:26:24.856105       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1213 10:26:24.857332       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1213 10:26:24.857538       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1213 10:26:24.857569       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1213 10:26:24.857798       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1213 10:26:24.857866       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 10:26:24.857892       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 10:26:24.859126       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1213 10:26:24.861508       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1213 10:26:24.868303       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 10:26:24.872629       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1213 10:26:24.873917       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 10:26:24.877253       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 10:26:24.883634       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 10:26:24.884867       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 10:26:24.894230       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1213 10:26:24.901613       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 10:26:24.902751       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 10:26:24.917749       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 10:26:24.917779       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 10:26:24.917786       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [8b7d286b7515e88a0d3224ca075a01af1257294f2c6c0d269e99dbd9376b3b53] <==
	I1213 10:26:22.193221       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 10:26:22.294429       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 10:26:22.294469       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.150"]
	E1213 10:26:22.294553       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 10:26:22.332505       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 10:26:22.332657       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 10:26:22.332852       1 server_linux.go:132] "Using iptables Proxier"
	I1213 10:26:22.346820       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 10:26:22.347737       1 server.go:527] "Version info" version="v1.34.2"
	I1213 10:26:22.347786       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 10:26:22.355530       1 config.go:200] "Starting service config controller"
	I1213 10:26:22.355543       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 10:26:22.355560       1 config.go:106] "Starting endpoint slice config controller"
	I1213 10:26:22.355564       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 10:26:22.355573       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 10:26:22.355578       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 10:26:22.355818       1 config.go:309] "Starting node config controller"
	I1213 10:26:22.355917       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 10:26:22.456865       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 10:26:22.457345       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 10:26:22.457398       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 10:26:22.457045       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [ff4ad7fb1bf52dcb2af7669e68f13804e8945073c351671bf680ed598474311b] <==
	I1213 10:26:20.342408       1 serving.go:386] Generated self-signed cert in-memory
	W1213 10:26:21.379903       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1213 10:26:21.379942       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 10:26:21.379956       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 10:26:21.379963       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 10:26:21.463662       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1213 10:26:21.463751       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 10:26:21.466981       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 10:26:21.467882       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 10:26:21.467933       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 10:26:21.467970       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 10:26:21.568748       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 10:26:21 test-preload-547899 kubelet[1179]: E1213 10:26:21.580854    1179 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-test-preload-547899\" already exists" pod="kube-system/etcd-test-preload-547899"
	Dec 13 10:26:21 test-preload-547899 kubelet[1179]: I1213 10:26:21.584186    1179 kubelet_node_status.go:124] "Node was previously registered" node="test-preload-547899"
	Dec 13 10:26:21 test-preload-547899 kubelet[1179]: I1213 10:26:21.584281    1179 kubelet_node_status.go:78] "Successfully registered node" node="test-preload-547899"
	Dec 13 10:26:21 test-preload-547899 kubelet[1179]: I1213 10:26:21.584301    1179 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 13 10:26:21 test-preload-547899 kubelet[1179]: I1213 10:26:21.585825    1179 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 13 10:26:21 test-preload-547899 kubelet[1179]: I1213 10:26:21.587579    1179 setters.go:543] "Node became not ready" node="test-preload-547899" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T10:26:21Z","lastTransitionTime":"2025-12-13T10:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Dec 13 10:26:21 test-preload-547899 kubelet[1179]: I1213 10:26:21.589853    1179 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 13 10:26:21 test-preload-547899 kubelet[1179]: I1213 10:26:21.597648    1179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34cd855b-5482-45fc-a273-d83e9819bed6-lib-modules\") pod \"kube-proxy-t7rjs\" (UID: \"34cd855b-5482-45fc-a273-d83e9819bed6\") " pod="kube-system/kube-proxy-t7rjs"
	Dec 13 10:26:21 test-preload-547899 kubelet[1179]: I1213 10:26:21.598590    1179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34cd855b-5482-45fc-a273-d83e9819bed6-xtables-lock\") pod \"kube-proxy-t7rjs\" (UID: \"34cd855b-5482-45fc-a273-d83e9819bed6\") " pod="kube-system/kube-proxy-t7rjs"
	Dec 13 10:26:21 test-preload-547899 kubelet[1179]: I1213 10:26:21.598618    1179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2c2d34e4-32e5-4226-bb45-7c22b5d162a9-tmp\") pod \"storage-provisioner\" (UID: \"2c2d34e4-32e5-4226-bb45-7c22b5d162a9\") " pod="kube-system/storage-provisioner"
	Dec 13 10:26:21 test-preload-547899 kubelet[1179]: E1213 10:26:21.598893    1179 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 13 10:26:21 test-preload-547899 kubelet[1179]: E1213 10:26:21.598983    1179 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ced6945d-c35e-4286-a345-104ac8208f20-config-volume podName:ced6945d-c35e-4286-a345-104ac8208f20 nodeName:}" failed. No retries permitted until 2025-12-13 10:26:22.098964729 +0000 UTC m=+5.624384825 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ced6945d-c35e-4286-a345-104ac8208f20-config-volume") pod "coredns-66bc5c9577-lbqwg" (UID: "ced6945d-c35e-4286-a345-104ac8208f20") : object "kube-system"/"coredns" not registered
	Dec 13 10:26:21 test-preload-547899 kubelet[1179]: I1213 10:26:21.715954    1179 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-547899"
	Dec 13 10:26:21 test-preload-547899 kubelet[1179]: I1213 10:26:21.716110    1179 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-547899"
	Dec 13 10:26:21 test-preload-547899 kubelet[1179]: E1213 10:26:21.731688    1179 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-test-preload-547899\" already exists" pod="kube-system/etcd-test-preload-547899"
	Dec 13 10:26:21 test-preload-547899 kubelet[1179]: E1213 10:26:21.732091    1179 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-547899\" already exists" pod="kube-system/kube-scheduler-test-preload-547899"
	Dec 13 10:26:22 test-preload-547899 kubelet[1179]: E1213 10:26:22.104317    1179 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 13 10:26:22 test-preload-547899 kubelet[1179]: E1213 10:26:22.104376    1179 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ced6945d-c35e-4286-a345-104ac8208f20-config-volume podName:ced6945d-c35e-4286-a345-104ac8208f20 nodeName:}" failed. No retries permitted until 2025-12-13 10:26:23.104362964 +0000 UTC m=+6.629783061 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ced6945d-c35e-4286-a345-104ac8208f20-config-volume") pod "coredns-66bc5c9577-lbqwg" (UID: "ced6945d-c35e-4286-a345-104ac8208f20") : object "kube-system"/"coredns" not registered
	Dec 13 10:26:22 test-preload-547899 kubelet[1179]: I1213 10:26:22.966235    1179 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 13 10:26:23 test-preload-547899 kubelet[1179]: E1213 10:26:23.112617    1179 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 13 10:26:23 test-preload-547899 kubelet[1179]: E1213 10:26:23.112722    1179 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ced6945d-c35e-4286-a345-104ac8208f20-config-volume podName:ced6945d-c35e-4286-a345-104ac8208f20 nodeName:}" failed. No retries permitted until 2025-12-13 10:26:25.112706923 +0000 UTC m=+8.638127021 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ced6945d-c35e-4286-a345-104ac8208f20-config-volume") pod "coredns-66bc5c9577-lbqwg" (UID: "ced6945d-c35e-4286-a345-104ac8208f20") : object "kube-system"/"coredns" not registered
	Dec 13 10:26:26 test-preload-547899 kubelet[1179]: E1213 10:26:26.652682    1179 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765621586652308479 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 13 10:26:26 test-preload-547899 kubelet[1179]: E1213 10:26:26.652704    1179 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765621586652308479 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 13 10:26:36 test-preload-547899 kubelet[1179]: E1213 10:26:36.656144    1179 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765621596655638178 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 13 10:26:36 test-preload-547899 kubelet[1179]: E1213 10:26:36.656166    1179 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765621596655638178 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	
	
	==> storage-provisioner [ecbb508b1f6e50519557c4e54218e4c6c5fd53b4a16abdcf1296fbe3d480fb4b] <==
	I1213 10:26:22.103378       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-547899 -n test-preload-547899
helpers_test.go:270: (dbg) Run:  kubectl --context test-preload-547899 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:176: Cleaning up "test-preload-547899" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-547899
--- FAIL: TestPreload (119.81s)
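The TestPreload post-mortem above is gathered with ordinary kubectl/minikube commands. A minimal shell sketch of the same triage steps, assuming the test-preload-547899 profile from this run and the locally built out/minikube-linux-amd64 binary (the PROFILE variable is only for illustration; the helpers may run additional checks):

	# Sketch: collect the same post-mortem information for the preload profile.
	PROFILE=test-preload-547899
	# API server state as reported by minikube (same check as helpers_test.go:263 above).
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p "$PROFILE" -n "$PROFILE"
	# Node conditions, capacity and recent events, as in the node description above.
	kubectl --context "$PROFILE" describe node "$PROFILE"
	# Recent component log excerpts like the etcd/kube-proxy/kubelet dumps above.
	out/minikube-linux-amd64 -p "$PROFILE" logs -n 25
	# Any pods stuck outside the Running phase (same query as helpers_test.go:270 above).
	kubectl --context "$PROFILE" get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'

Here the eviction-manager "missing image stats" errors and the Rebooted warning in the kubelet events are the notable symptoms surfaced by this collection.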

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (59.96s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-617427 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-617427 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (55.595068652s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-617427] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-617427" primary control-plane node in "pause-617427" cluster
	* Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-617427" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 10:31:27.267460  426140 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:31:27.267623  426140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:31:27.267632  426140 out.go:374] Setting ErrFile to fd 2...
	I1213 10:31:27.267640  426140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:31:27.267921  426140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
	I1213 10:31:27.268483  426140 out.go:368] Setting JSON to false
	I1213 10:31:27.269791  426140 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":8036,"bootTime":1765613851,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 10:31:27.269863  426140 start.go:143] virtualization: kvm guest
	I1213 10:31:27.273472  426140 out.go:179] * [pause-617427] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 10:31:27.274856  426140 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:31:27.274844  426140 notify.go:221] Checking for updates...
	I1213 10:31:27.276791  426140 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:31:27.278394  426140 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
	I1213 10:31:27.279876  426140 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
	I1213 10:31:27.281362  426140 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 10:31:27.282681  426140 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:31:27.284623  426140 config.go:182] Loaded profile config "pause-617427": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:31:27.285457  426140 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:31:27.798638  426140 out.go:179] * Using the kvm2 driver based on existing profile
	I1213 10:31:27.800039  426140 start.go:309] selected driver: kvm2
	I1213 10:31:27.800060  426140 start.go:927] validating driver "kvm2" against &{Name:pause-617427 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-617427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:31:27.800258  426140 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:31:27.801730  426140 cni.go:84] Creating CNI manager for ""
	I1213 10:31:27.801814  426140 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 10:31:27.801872  426140 start.go:353] cluster config:
	{Name:pause-617427 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-617427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:31:27.802044  426140 iso.go:125] acquiring lock: {Name:mk4ce8bfab58620efe86d1c7a68d79ed9c81b6ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:31:27.805468  426140 out.go:179] * Starting "pause-617427" primary control-plane node in "pause-617427" cluster
	I1213 10:31:27.806543  426140 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 10:31:27.806596  426140 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-387918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 10:31:27.806607  426140 cache.go:65] Caching tarball of preloaded images
	I1213 10:31:27.806744  426140 preload.go:238] Found /home/jenkins/minikube-integration/22127-387918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 10:31:27.806757  426140 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 10:31:27.806887  426140 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/pause-617427/config.json ...
	I1213 10:31:27.807108  426140 start.go:360] acquireMachinesLock for pause-617427: {Name:mk911c6c71130df32abbe489ec2f7be251c727ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 10:31:46.624169  426140 start.go:364] duration metric: took 18.81699209s to acquireMachinesLock for "pause-617427"
	I1213 10:31:46.624240  426140 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:31:46.624250  426140 fix.go:54] fixHost starting: 
	I1213 10:31:46.626901  426140 fix.go:112] recreateIfNeeded on pause-617427: state=Running err=<nil>
	W1213 10:31:46.626957  426140 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:31:46.628745  426140 out.go:252] * Updating the running kvm2 "pause-617427" VM ...
	I1213 10:31:46.628806  426140 machine.go:94] provisionDockerMachine start ...
	I1213 10:31:46.633616  426140 main.go:143] libmachine: domain pause-617427 has defined MAC address 52:54:00:41:7d:ed in network mk-pause-617427
	I1213 10:31:46.634049  426140 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:7d:ed", ip: ""} in network mk-pause-617427: {Iface:virbr2 ExpiryTime:2025-12-13 11:30:23 +0000 UTC Type:0 Mac:52:54:00:41:7d:ed Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:pause-617427 Clientid:01:52:54:00:41:7d:ed}
	I1213 10:31:46.634077  426140 main.go:143] libmachine: domain pause-617427 has defined IP address 192.168.50.105 and MAC address 52:54:00:41:7d:ed in network mk-pause-617427
	I1213 10:31:46.634350  426140 main.go:143] libmachine: Using SSH client type: native
	I1213 10:31:46.634689  426140 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I1213 10:31:46.634705  426140 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:31:46.755693  426140 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-617427
	
	I1213 10:31:46.755730  426140 buildroot.go:166] provisioning hostname "pause-617427"
	I1213 10:31:46.760477  426140 main.go:143] libmachine: domain pause-617427 has defined MAC address 52:54:00:41:7d:ed in network mk-pause-617427
	I1213 10:31:46.761027  426140 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:7d:ed", ip: ""} in network mk-pause-617427: {Iface:virbr2 ExpiryTime:2025-12-13 11:30:23 +0000 UTC Type:0 Mac:52:54:00:41:7d:ed Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:pause-617427 Clientid:01:52:54:00:41:7d:ed}
	I1213 10:31:46.761084  426140 main.go:143] libmachine: domain pause-617427 has defined IP address 192.168.50.105 and MAC address 52:54:00:41:7d:ed in network mk-pause-617427
	I1213 10:31:46.762215  426140 main.go:143] libmachine: Using SSH client type: native
	I1213 10:31:46.762557  426140 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I1213 10:31:46.762576  426140 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-617427 && echo "pause-617427" | sudo tee /etc/hostname
	I1213 10:31:46.906275  426140 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-617427
	
	I1213 10:31:46.909865  426140 main.go:143] libmachine: domain pause-617427 has defined MAC address 52:54:00:41:7d:ed in network mk-pause-617427
	I1213 10:31:46.910413  426140 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:7d:ed", ip: ""} in network mk-pause-617427: {Iface:virbr2 ExpiryTime:2025-12-13 11:30:23 +0000 UTC Type:0 Mac:52:54:00:41:7d:ed Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:pause-617427 Clientid:01:52:54:00:41:7d:ed}
	I1213 10:31:46.910475  426140 main.go:143] libmachine: domain pause-617427 has defined IP address 192.168.50.105 and MAC address 52:54:00:41:7d:ed in network mk-pause-617427
	I1213 10:31:46.910733  426140 main.go:143] libmachine: Using SSH client type: native
	I1213 10:31:46.910962  426140 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I1213 10:31:46.910982  426140 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-617427' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-617427/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-617427' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:31:47.035131  426140 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:31:47.035183  426140 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22127-387918/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-387918/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-387918/.minikube}
	I1213 10:31:47.035224  426140 buildroot.go:174] setting up certificates
	I1213 10:31:47.035238  426140 provision.go:84] configureAuth start
	I1213 10:31:47.038934  426140 main.go:143] libmachine: domain pause-617427 has defined MAC address 52:54:00:41:7d:ed in network mk-pause-617427
	I1213 10:31:47.039583  426140 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:7d:ed", ip: ""} in network mk-pause-617427: {Iface:virbr2 ExpiryTime:2025-12-13 11:30:23 +0000 UTC Type:0 Mac:52:54:00:41:7d:ed Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:pause-617427 Clientid:01:52:54:00:41:7d:ed}
	I1213 10:31:47.039612  426140 main.go:143] libmachine: domain pause-617427 has defined IP address 192.168.50.105 and MAC address 52:54:00:41:7d:ed in network mk-pause-617427
	I1213 10:31:47.042430  426140 main.go:143] libmachine: domain pause-617427 has defined MAC address 52:54:00:41:7d:ed in network mk-pause-617427
	I1213 10:31:47.042944  426140 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:7d:ed", ip: ""} in network mk-pause-617427: {Iface:virbr2 ExpiryTime:2025-12-13 11:30:23 +0000 UTC Type:0 Mac:52:54:00:41:7d:ed Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:pause-617427 Clientid:01:52:54:00:41:7d:ed}
	I1213 10:31:47.042970  426140 main.go:143] libmachine: domain pause-617427 has defined IP address 192.168.50.105 and MAC address 52:54:00:41:7d:ed in network mk-pause-617427
	I1213 10:31:47.043220  426140 provision.go:143] copyHostCerts
	I1213 10:31:47.043292  426140 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-387918/.minikube/ca.pem, removing ...
	I1213 10:31:47.043311  426140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-387918/.minikube/ca.pem
	I1213 10:31:47.043397  426140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-387918/.minikube/ca.pem (1078 bytes)
	I1213 10:31:47.043523  426140 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-387918/.minikube/cert.pem, removing ...
	I1213 10:31:47.043533  426140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-387918/.minikube/cert.pem
	I1213 10:31:47.043565  426140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-387918/.minikube/cert.pem (1123 bytes)
	I1213 10:31:47.043642  426140 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-387918/.minikube/key.pem, removing ...
	I1213 10:31:47.043653  426140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-387918/.minikube/key.pem
	I1213 10:31:47.043690  426140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-387918/.minikube/key.pem (1675 bytes)
	I1213 10:31:47.043755  426140 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-387918/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca-key.pem org=jenkins.pause-617427 san=[127.0.0.1 192.168.50.105 localhost minikube pause-617427]
	I1213 10:31:47.250923  426140 provision.go:177] copyRemoteCerts
	I1213 10:31:47.250989  426140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:31:47.254075  426140 main.go:143] libmachine: domain pause-617427 has defined MAC address 52:54:00:41:7d:ed in network mk-pause-617427
	I1213 10:31:47.254785  426140 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:7d:ed", ip: ""} in network mk-pause-617427: {Iface:virbr2 ExpiryTime:2025-12-13 11:30:23 +0000 UTC Type:0 Mac:52:54:00:41:7d:ed Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:pause-617427 Clientid:01:52:54:00:41:7d:ed}
	I1213 10:31:47.254826  426140 main.go:143] libmachine: domain pause-617427 has defined IP address 192.168.50.105 and MAC address 52:54:00:41:7d:ed in network mk-pause-617427
	I1213 10:31:47.255082  426140 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/pause-617427/id_rsa Username:docker}
	I1213 10:31:47.358712  426140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1213 10:31:47.403495  426140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 10:31:47.439607  426140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:31:47.481785  426140 provision.go:87] duration metric: took 446.526806ms to configureAuth
	I1213 10:31:47.481824  426140 buildroot.go:189] setting minikube options for container-runtime
	I1213 10:31:47.482097  426140 config.go:182] Loaded profile config "pause-617427": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:31:47.485789  426140 main.go:143] libmachine: domain pause-617427 has defined MAC address 52:54:00:41:7d:ed in network mk-pause-617427
	I1213 10:31:47.486426  426140 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:7d:ed", ip: ""} in network mk-pause-617427: {Iface:virbr2 ExpiryTime:2025-12-13 11:30:23 +0000 UTC Type:0 Mac:52:54:00:41:7d:ed Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:pause-617427 Clientid:01:52:54:00:41:7d:ed}
	I1213 10:31:47.486489  426140 main.go:143] libmachine: domain pause-617427 has defined IP address 192.168.50.105 and MAC address 52:54:00:41:7d:ed in network mk-pause-617427
	I1213 10:31:47.486737  426140 main.go:143] libmachine: Using SSH client type: native
	I1213 10:31:47.486963  426140 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I1213 10:31:47.486980  426140 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 10:31:53.120091  426140 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 10:31:53.120132  426140 machine.go:97] duration metric: took 6.491312815s to provisionDockerMachine
	I1213 10:31:53.120150  426140 start.go:293] postStartSetup for "pause-617427" (driver="kvm2")
	I1213 10:31:53.120167  426140 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:31:53.120281  426140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:31:53.123940  426140 main.go:143] libmachine: domain pause-617427 has defined MAC address 52:54:00:41:7d:ed in network mk-pause-617427
	I1213 10:31:53.124416  426140 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:7d:ed", ip: ""} in network mk-pause-617427: {Iface:virbr2 ExpiryTime:2025-12-13 11:30:23 +0000 UTC Type:0 Mac:52:54:00:41:7d:ed Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:pause-617427 Clientid:01:52:54:00:41:7d:ed}
	I1213 10:31:53.124445  426140 main.go:143] libmachine: domain pause-617427 has defined IP address 192.168.50.105 and MAC address 52:54:00:41:7d:ed in network mk-pause-617427
	I1213 10:31:53.124586  426140 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/pause-617427/id_rsa Username:docker}
	I1213 10:31:53.213520  426140 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:31:53.218552  426140 info.go:137] Remote host: Buildroot 2025.02
	I1213 10:31:53.218584  426140 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-387918/.minikube/addons for local assets ...
	I1213 10:31:53.218670  426140 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-387918/.minikube/files for local assets ...
	I1213 10:31:53.218760  426140 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-387918/.minikube/files/etc/ssl/certs/3918772.pem -> 3918772.pem in /etc/ssl/certs
	I1213 10:31:53.218850  426140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 10:31:53.231072  426140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/files/etc/ssl/certs/3918772.pem --> /etc/ssl/certs/3918772.pem (1708 bytes)
	I1213 10:31:53.264237  426140 start.go:296] duration metric: took 144.06318ms for postStartSetup
	I1213 10:31:53.264298  426140 fix.go:56] duration metric: took 6.64004918s for fixHost
	I1213 10:31:53.267273  426140 main.go:143] libmachine: domain pause-617427 has defined MAC address 52:54:00:41:7d:ed in network mk-pause-617427
	I1213 10:31:53.267777  426140 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:7d:ed", ip: ""} in network mk-pause-617427: {Iface:virbr2 ExpiryTime:2025-12-13 11:30:23 +0000 UTC Type:0 Mac:52:54:00:41:7d:ed Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:pause-617427 Clientid:01:52:54:00:41:7d:ed}
	I1213 10:31:53.267811  426140 main.go:143] libmachine: domain pause-617427 has defined IP address 192.168.50.105 and MAC address 52:54:00:41:7d:ed in network mk-pause-617427
	I1213 10:31:53.268016  426140 main.go:143] libmachine: Using SSH client type: native
	I1213 10:31:53.268262  426140 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I1213 10:31:53.268273  426140 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1213 10:31:53.383246  426140 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765621913.378639769
	
	I1213 10:31:53.383280  426140 fix.go:216] guest clock: 1765621913.378639769
	I1213 10:31:53.383292  426140 fix.go:229] Guest: 2025-12-13 10:31:53.378639769 +0000 UTC Remote: 2025-12-13 10:31:53.264303911 +0000 UTC m=+26.068594944 (delta=114.335858ms)
	I1213 10:31:53.383314  426140 fix.go:200] guest clock delta is within tolerance: 114.335858ms
	I1213 10:31:53.383341  426140 start.go:83] releasing machines lock for "pause-617427", held for 6.759103803s
	I1213 10:31:53.387533  426140 main.go:143] libmachine: domain pause-617427 has defined MAC address 52:54:00:41:7d:ed in network mk-pause-617427
	I1213 10:31:53.388049  426140 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:7d:ed", ip: ""} in network mk-pause-617427: {Iface:virbr2 ExpiryTime:2025-12-13 11:30:23 +0000 UTC Type:0 Mac:52:54:00:41:7d:ed Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:pause-617427 Clientid:01:52:54:00:41:7d:ed}
	I1213 10:31:53.388081  426140 main.go:143] libmachine: domain pause-617427 has defined IP address 192.168.50.105 and MAC address 52:54:00:41:7d:ed in network mk-pause-617427
	I1213 10:31:53.388669  426140 ssh_runner.go:195] Run: cat /version.json
	I1213 10:31:53.388788  426140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:31:53.392038  426140 main.go:143] libmachine: domain pause-617427 has defined MAC address 52:54:00:41:7d:ed in network mk-pause-617427
	I1213 10:31:53.392481  426140 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:7d:ed", ip: ""} in network mk-pause-617427: {Iface:virbr2 ExpiryTime:2025-12-13 11:30:23 +0000 UTC Type:0 Mac:52:54:00:41:7d:ed Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:pause-617427 Clientid:01:52:54:00:41:7d:ed}
	I1213 10:31:53.392491  426140 main.go:143] libmachine: domain pause-617427 has defined MAC address 52:54:00:41:7d:ed in network mk-pause-617427
	I1213 10:31:53.392523  426140 main.go:143] libmachine: domain pause-617427 has defined IP address 192.168.50.105 and MAC address 52:54:00:41:7d:ed in network mk-pause-617427
	I1213 10:31:53.392741  426140 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/pause-617427/id_rsa Username:docker}
	I1213 10:31:53.393064  426140 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:7d:ed", ip: ""} in network mk-pause-617427: {Iface:virbr2 ExpiryTime:2025-12-13 11:30:23 +0000 UTC Type:0 Mac:52:54:00:41:7d:ed Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:pause-617427 Clientid:01:52:54:00:41:7d:ed}
	I1213 10:31:53.393098  426140 main.go:143] libmachine: domain pause-617427 has defined IP address 192.168.50.105 and MAC address 52:54:00:41:7d:ed in network mk-pause-617427
	I1213 10:31:53.393262  426140 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/pause-617427/id_rsa Username:docker}
	I1213 10:31:53.482929  426140 ssh_runner.go:195] Run: systemctl --version
	I1213 10:31:53.511810  426140 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 10:31:53.670231  426140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:31:53.683501  426140 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:31:53.683601  426140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:31:53.695923  426140 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
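	The find/mv pass above is logged with its shell quoting stripped. The same disable step written out with explicit quoting, as a sketch assuming GNU find (as shipped in the minikube guest image):
	  $ sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;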
	I1213 10:31:53.695976  426140 start.go:496] detecting cgroup driver to use...
	I1213 10:31:53.696061  426140 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:31:53.726378  426140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:31:53.751813  426140 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:31:53.751895  426140 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:31:53.780556  426140 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:31:53.801606  426140 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:31:54.011724  426140 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:31:54.192772  426140 docker.go:234] disabling docker service ...
	I1213 10:31:54.192865  426140 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:31:54.226178  426140 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:31:54.243548  426140 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:31:54.434926  426140 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:31:54.605129  426140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
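	The runtime cleanup above always follows the same stop / disable / mask sequence. Condensed for the docker case as a sketch (the cri-docker pass a few lines earlier is identical apart from the unit names):
	  $ sudo systemctl stop -f docker.socket docker.service
	  $ sudo systemctl disable docker.socket
	  $ sudo systemctl mask docker.service
	  $ sudo systemctl is-active --quiet docker || echo "docker is not active"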
	I1213 10:31:54.626530  426140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:31:54.655838  426140 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 10:31:54.655937  426140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:31:54.669808  426140 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 10:31:54.669891  426140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:31:54.683652  426140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:31:54.697416  426140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:31:54.710659  426140 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:31:54.725368  426140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:31:54.738672  426140 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:31:54.753454  426140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:31:54.767201  426140 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:31:54.782931  426140 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:31:54.798337  426140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:31:54.986538  426140 ssh_runner.go:195] Run: sudo systemctl restart crio
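	After the sed edits above, the crictl and CRI-O settings can be spot-checked on the node before the restart takes effect. A sketch of what to look for, with the expected values taken directly from the commands in this log:
	  $ cat /etc/crictl.yaml
	  # runtime-endpoint: unix:///var/run/crio/crio.sock
	  $ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  # pause_image = "registry.k8s.io/pause:3.10.1"
	  # cgroup_manager = "cgroupfs"
	  # conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls)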
	I1213 10:31:55.260798  426140 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 10:31:55.260886  426140 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 10:31:55.267839  426140 start.go:564] Will wait 60s for crictl version
	I1213 10:31:55.267930  426140 ssh_runner.go:195] Run: which crictl
	I1213 10:31:55.272418  426140 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 10:31:55.323843  426140 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1213 10:31:55.323926  426140 ssh_runner.go:195] Run: crio --version
	I1213 10:31:55.355176  426140 ssh_runner.go:195] Run: crio --version
	I1213 10:31:55.389039  426140 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1213 10:31:55.393806  426140 main.go:143] libmachine: domain pause-617427 has defined MAC address 52:54:00:41:7d:ed in network mk-pause-617427
	I1213 10:31:55.394447  426140 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:7d:ed", ip: ""} in network mk-pause-617427: {Iface:virbr2 ExpiryTime:2025-12-13 11:30:23 +0000 UTC Type:0 Mac:52:54:00:41:7d:ed Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:pause-617427 Clientid:01:52:54:00:41:7d:ed}
	I1213 10:31:55.394474  426140 main.go:143] libmachine: domain pause-617427 has defined IP address 192.168.50.105 and MAC address 52:54:00:41:7d:ed in network mk-pause-617427
	I1213 10:31:55.394729  426140 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1213 10:31:55.399593  426140 kubeadm.go:884] updating cluster {Name:pause-617427 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2
ClusterName:pause-617427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:31:55.399838  426140 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 10:31:55.399916  426140 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:31:55.441697  426140 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:31:55.441728  426140 crio.go:433] Images already preloaded, skipping extraction
	I1213 10:31:55.441806  426140 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:31:55.480074  426140 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:31:55.480097  426140 cache_images.go:86] Images are preloaded, skipping loading
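	The preload check above only confirms that `crictl images --output json` returns the expected set. To list the tags actually present on the node, a sketch assuming jq is available (it is not part of the guest image, so pipe the JSON to a host with jq if needed):
	  $ sudo crictl images --output json | jq -r '.images[].repoTags[]?' | sort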
	I1213 10:31:55.480105  426140 kubeadm.go:935] updating node { 192.168.50.105 8443 v1.34.2 crio true true} ...
	I1213 10:31:55.480239  426140 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-617427 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-617427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 10:31:55.480350  426140 ssh_runner.go:195] Run: crio config
	I1213 10:31:55.536524  426140 cni.go:84] Creating CNI manager for ""
	I1213 10:31:55.536566  426140 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 10:31:55.536590  426140 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:31:55.536626  426140 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.105 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-617427 NodeName:pause-617427 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:31:55.536792  426140 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-617427"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.105"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.105"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 10:31:55.536877  426140 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 10:31:55.555158  426140 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:31:55.555265  426140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:31:55.599405  426140 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1213 10:31:55.663401  426140 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 10:31:55.741121  426140 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
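	The kubeadm config rendered above is written to /var/tmp/minikube/kubeadm.yaml.new. If a start like this later fails at the kubeadm phase, the file can be checked in place; a sketch, assuming kubeadm is among the binaries found under /var/lib/minikube/binaries/v1.34.2:
	  $ sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new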
	I1213 10:31:55.809708  426140 ssh_runner.go:195] Run: grep 192.168.50.105	control-plane.minikube.internal$ /etc/hosts
	I1213 10:31:55.822476  426140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:31:56.178379  426140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:31:56.220601  426140 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/pause-617427 for IP: 192.168.50.105
	I1213 10:31:56.220637  426140 certs.go:195] generating shared ca certs ...
	I1213 10:31:56.220683  426140 certs.go:227] acquiring lock for ca certs: {Name:mkd63ae6418df38b62936a9f8faa40fdd87e4397 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:31:56.220921  426140 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-387918/.minikube/ca.key
	I1213 10:31:56.221004  426140 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-387918/.minikube/proxy-client-ca.key
	I1213 10:31:56.221023  426140 certs.go:257] generating profile certs ...
	I1213 10:31:56.221174  426140 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/pause-617427/client.key
	I1213 10:31:56.221280  426140 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/pause-617427/apiserver.key.202491ce
	I1213 10:31:56.221364  426140 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/pause-617427/proxy-client.key
	I1213 10:31:56.221595  426140 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/391877.pem (1338 bytes)
	W1213 10:31:56.221655  426140 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-387918/.minikube/certs/391877_empty.pem, impossibly tiny 0 bytes
	I1213 10:31:56.221670  426140 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:31:56.221704  426140 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem (1078 bytes)
	I1213 10:31:56.221744  426140 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:31:56.221784  426140 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/key.pem (1675 bytes)
	I1213 10:31:56.221872  426140 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/files/etc/ssl/certs/3918772.pem (1708 bytes)
	I1213 10:31:56.222781  426140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:31:56.292983  426140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:31:56.347507  426140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:31:56.422075  426140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 10:31:56.472758  426140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/pause-617427/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 10:31:56.549096  426140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/pause-617427/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 10:31:56.633857  426140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/pause-617427/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:31:56.755771  426140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/pause-617427/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 10:31:56.865525  426140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/files/etc/ssl/certs/3918772.pem --> /usr/share/ca-certificates/3918772.pem (1708 bytes)
	I1213 10:31:56.975574  426140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:31:57.048675  426140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/certs/391877.pem --> /usr/share/ca-certificates/391877.pem (1338 bytes)
	I1213 10:31:57.126181  426140 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:31:57.171481  426140 ssh_runner.go:195] Run: openssl version
	I1213 10:31:57.183413  426140 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:57.208493  426140 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:31:57.231624  426140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:57.242814  426140 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:57.242909  426140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:57.259556  426140 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:31:57.297410  426140 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/391877.pem
	I1213 10:31:57.341743  426140 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/391877.pem /etc/ssl/certs/391877.pem
	I1213 10:31:57.380162  426140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/391877.pem
	I1213 10:31:57.399006  426140 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 09:23 /usr/share/ca-certificates/391877.pem
	I1213 10:31:57.399080  426140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391877.pem
	I1213 10:31:57.413528  426140 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:31:57.435578  426140 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3918772.pem
	I1213 10:31:57.455971  426140 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3918772.pem /etc/ssl/certs/3918772.pem
	I1213 10:31:57.474798  426140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3918772.pem
	I1213 10:31:57.483606  426140 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 09:23 /usr/share/ca-certificates/3918772.pem
	I1213 10:31:57.483685  426140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3918772.pem
	I1213 10:31:57.499454  426140 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:31:57.515292  426140 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:31:57.521398  426140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:31:57.531457  426140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:31:57.540552  426140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:31:57.550532  426140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:31:57.561850  426140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:31:57.573346  426140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
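	The certificate checks above follow a fixed pattern: install each CA under /usr/share/ca-certificates, link it into /etc/ssl/certs under its OpenSSL subject hash, then confirm every serving cert stays valid for at least 24 hours (-checkend 86400). A condensed sketch of the same pattern for one CA and one cert from this log:
	  $ h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	  $ sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	  $ sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	      && echo "valid for at least 24h" || echo "expires within 24h"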
	I1213 10:31:57.589318  426140 kubeadm.go:401] StartCluster: {Name:pause-617427 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 Cl
usterName:pause-617427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:31:57.589496  426140 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:31:57.589586  426140 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:31:57.653683  426140 cri.go:89] found id: "639ae66155a42496ae026ae909a35634c613909c8f456458e05a2009d07d3c0b"
	I1213 10:31:57.653717  426140 cri.go:89] found id: "baf35aa1d2b45bb5525d0dccfb256a0f277a28a168309f93d5f7aeb22cc81f6d"
	I1213 10:31:57.653726  426140 cri.go:89] found id: "2d40453aecc37eb4c347838c490725ea24158d53b71297d2e29fe1da49bde772"
	I1213 10:31:57.653733  426140 cri.go:89] found id: "e603073854d97f3ddecca2d17efe16393662b7100e35615ec789b4fad68d34c0"
	I1213 10:31:57.653738  426140 cri.go:89] found id: "a6b1341425c3d71feefd8ec6f79adfb3469bc8a278b52ea781100bd84163812b"
	I1213 10:31:57.653743  426140 cri.go:89] found id: "9a7abb1d8a7886994e32b3aa425c66d0714ffa12c240b84062a9380b6eda01e8"
	I1213 10:31:57.653749  426140 cri.go:89] found id: "d7b01c52d4399f83e18c4618beb6bfcddb5f8b44399baddfbd157ec084d0af2a"
	I1213 10:31:57.653754  426140 cri.go:89] found id: "9fc363dfc1677e780777d8fd668049090291f9c8df98005e70ca057d4426bee3"
	I1213 10:31:57.653759  426140 cri.go:89] found id: "cc1e155bce7668ffab50d171331c403ec5ef4e4040b2859aa7e6118538759f17"
	I1213 10:31:57.653771  426140 cri.go:89] found id: "996fb44e19e5bd99048f5bf2c9d7645dbbae6bee523f14a559d0969f8116343a"
	I1213 10:31:57.653778  426140 cri.go:89] found id: "cd937860d27ae398c3b98aca9b9782e0eafa4a97222a3c8a0dc2518f49074358"
	I1213 10:31:57.653783  426140 cri.go:89] found id: ""
	I1213 10:31:57.653845  426140 ssh_runner.go:195] Run: sudo runc list -f json
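	The bare container IDs listed above come from a quiet listing filtered by namespace label. Dropping --quiet with the same filter gives the readable columns (name, state, image) for each ID; a sketch:
	  $ sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system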

                                                
                                                
** /stderr **
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-617427 -n pause-617427
helpers_test.go:253: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-617427 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-617427 logs -n 25: (1.55724777s)
helpers_test.go:261: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────┬───────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                      │    PROFILE    │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────┼───────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-248819 sudo systemctl status kubelet --all --full --no-pager         │ cilium-248819 │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo systemctl cat kubelet --no-pager                         │ cilium-248819 │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo journalctl -xeu kubelet --all --full --no-pager          │ cilium-248819 │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo cat /etc/kubernetes/kubelet.conf                         │ cilium-248819 │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo cat /var/lib/kubelet/config.yaml                         │ cilium-248819 │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo systemctl status docker --all --full --no-pager          │ cilium-248819 │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo systemctl cat docker --no-pager                          │ cilium-248819 │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo cat /etc/docker/daemon.json                              │ cilium-248819 │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo docker system info                                       │ cilium-248819 │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo systemctl status cri-docker --all --full --no-pager      │ cilium-248819 │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo systemctl cat cri-docker --no-pager                      │ cilium-248819 │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ cilium-248819 │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo cat /usr/lib/systemd/system/cri-docker.service           │ cilium-248819 │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo cri-dockerd --version                                    │ cilium-248819 │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo systemctl status containerd --all --full --no-pager      │ cilium-248819 │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo systemctl cat containerd --no-pager                      │ cilium-248819 │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo cat /lib/systemd/system/containerd.service               │ cilium-248819 │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo cat /etc/containerd/config.toml                          │ cilium-248819 │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo containerd config dump                                   │ cilium-248819 │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo systemctl status crio --all --full --no-pager            │ cilium-248819 │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo systemctl cat crio --no-pager                            │ cilium-248819 │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ cilium-248819 │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo crio config                                              │ cilium-248819 │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ delete  │ -p cilium-248819                                                               │ cilium-248819 │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │ 13 Dec 25 10:32 UTC │
	│ start   │ -p guest-964680 --no-kubernetes --driver=kvm2  --container-runtime=crio        │ guest-964680  │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────┴───────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:32:23
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:32:23.105516  428709 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:32:23.105773  428709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:32:23.105777  428709 out.go:374] Setting ErrFile to fd 2...
	I1213 10:32:23.105780  428709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:32:23.106044  428709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
	I1213 10:32:23.106547  428709 out.go:368] Setting JSON to false
	I1213 10:32:23.107580  428709 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":8092,"bootTime":1765613851,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 10:32:23.107635  428709 start.go:143] virtualization: kvm guest
	I1213 10:32:23.109848  428709 out.go:179] * [guest-964680] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 10:32:23.111233  428709 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:32:23.111248  428709 notify.go:221] Checking for updates...
	I1213 10:32:23.113992  428709 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:32:23.115182  428709 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
	I1213 10:32:23.116373  428709 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
	I1213 10:32:18.444109  424744 api_server.go:253] Checking apiserver healthz at https://192.168.72.174:8443/healthz ...
	I1213 10:32:18.444774  424744 api_server.go:269] stopped: https://192.168.72.174:8443/healthz: Get "https://192.168.72.174:8443/healthz": dial tcp 192.168.72.174:8443: connect: connection refused
	I1213 10:32:18.444834  424744 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:32:18.444889  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:32:18.488391  424744 cri.go:89] found id: "89f23bbb20fa65bbf4a3baf0e2957049b356f26d2ad63e230f17e7373570ef2f"
	I1213 10:32:18.488418  424744 cri.go:89] found id: ""
	I1213 10:32:18.488427  424744 logs.go:282] 1 containers: [89f23bbb20fa65bbf4a3baf0e2957049b356f26d2ad63e230f17e7373570ef2f]
	I1213 10:32:18.488474  424744 ssh_runner.go:195] Run: which crictl
	I1213 10:32:18.493919  424744 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:32:18.494008  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:32:18.536053  424744 cri.go:89] found id: "0e87939746492795b2c9cffeaa0960f79cd26fef93ad2ee13bad7163090179f2"
	I1213 10:32:18.536081  424744 cri.go:89] found id: ""
	I1213 10:32:18.536093  424744 logs.go:282] 1 containers: [0e87939746492795b2c9cffeaa0960f79cd26fef93ad2ee13bad7163090179f2]
	I1213 10:32:18.536153  424744 ssh_runner.go:195] Run: which crictl
	I1213 10:32:18.541917  424744 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:32:18.542005  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:32:18.583725  424744 cri.go:89] found id: ""
	I1213 10:32:18.583760  424744 logs.go:282] 0 containers: []
	W1213 10:32:18.583772  424744 logs.go:284] No container was found matching "coredns"
	I1213 10:32:18.583780  424744 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:32:18.583839  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:32:18.629278  424744 cri.go:89] found id: "d1fa391bd41e19817effdbc921fa9a56f3959cd3079e4a6d02474b14b85521a0"
	I1213 10:32:18.629300  424744 cri.go:89] found id: "c4bfd7d6729ee1cd3c29226bf896073ae01ecb088bbcc4e539adbf50524ada00"
	I1213 10:32:18.629307  424744 cri.go:89] found id: ""
	I1213 10:32:18.629317  424744 logs.go:282] 2 containers: [d1fa391bd41e19817effdbc921fa9a56f3959cd3079e4a6d02474b14b85521a0 c4bfd7d6729ee1cd3c29226bf896073ae01ecb088bbcc4e539adbf50524ada00]
	I1213 10:32:18.629386  424744 ssh_runner.go:195] Run: which crictl
	I1213 10:32:18.635122  424744 ssh_runner.go:195] Run: which crictl
	I1213 10:32:18.640766  424744 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:32:18.640850  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:32:18.685924  424744 cri.go:89] found id: ""
	I1213 10:32:18.685954  424744 logs.go:282] 0 containers: []
	W1213 10:32:18.685963  424744 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:32:18.685972  424744 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:32:18.686036  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:32:18.739419  424744 cri.go:89] found id: "908981b2654466c610edc2a4f838527f94a60873e10fc4a4b2a1cb7f7b7e8802"
	I1213 10:32:18.739448  424744 cri.go:89] found id: ""
	I1213 10:32:18.739460  424744 logs.go:282] 1 containers: [908981b2654466c610edc2a4f838527f94a60873e10fc4a4b2a1cb7f7b7e8802]
	I1213 10:32:18.739517  424744 ssh_runner.go:195] Run: which crictl
	I1213 10:32:18.744581  424744 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:32:18.744664  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:32:18.789811  424744 cri.go:89] found id: ""
	I1213 10:32:18.789834  424744 logs.go:282] 0 containers: []
	W1213 10:32:18.789843  424744 logs.go:284] No container was found matching "kindnet"
	I1213 10:32:18.789848  424744 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 10:32:18.789912  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 10:32:18.846835  424744 cri.go:89] found id: "de752d57f9e4e8e9f1bd0bfc21385294655f223beef697ab444f6770da9bc639"
	I1213 10:32:18.846860  424744 cri.go:89] found id: ""
	I1213 10:32:18.846870  424744 logs.go:282] 1 containers: [de752d57f9e4e8e9f1bd0bfc21385294655f223beef697ab444f6770da9bc639]
	I1213 10:32:18.846935  424744 ssh_runner.go:195] Run: which crictl
	I1213 10:32:18.851679  424744 logs.go:123] Gathering logs for etcd [0e87939746492795b2c9cffeaa0960f79cd26fef93ad2ee13bad7163090179f2] ...
	I1213 10:32:18.851731  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e87939746492795b2c9cffeaa0960f79cd26fef93ad2ee13bad7163090179f2"
	I1213 10:32:18.906346  424744 logs.go:123] Gathering logs for dmesg ...
	I1213 10:32:18.906385  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:32:18.926056  424744 logs.go:123] Gathering logs for kube-scheduler [d1fa391bd41e19817effdbc921fa9a56f3959cd3079e4a6d02474b14b85521a0] ...
	I1213 10:32:18.926099  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1fa391bd41e19817effdbc921fa9a56f3959cd3079e4a6d02474b14b85521a0"
	I1213 10:32:19.016259  424744 logs.go:123] Gathering logs for kube-scheduler [c4bfd7d6729ee1cd3c29226bf896073ae01ecb088bbcc4e539adbf50524ada00] ...
	I1213 10:32:19.016301  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4bfd7d6729ee1cd3c29226bf896073ae01ecb088bbcc4e539adbf50524ada00"
	I1213 10:32:19.060577  424744 logs.go:123] Gathering logs for kube-controller-manager [908981b2654466c610edc2a4f838527f94a60873e10fc4a4b2a1cb7f7b7e8802] ...
	I1213 10:32:19.060614  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 908981b2654466c610edc2a4f838527f94a60873e10fc4a4b2a1cb7f7b7e8802"
	I1213 10:32:19.107206  424744 logs.go:123] Gathering logs for storage-provisioner [de752d57f9e4e8e9f1bd0bfc21385294655f223beef697ab444f6770da9bc639] ...
	I1213 10:32:19.107258  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de752d57f9e4e8e9f1bd0bfc21385294655f223beef697ab444f6770da9bc639"
	I1213 10:32:19.151668  424744 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:32:19.151701  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:32:19.485778  424744 logs.go:123] Gathering logs for container status ...
	I1213 10:32:19.485811  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:32:19.538073  424744 logs.go:123] Gathering logs for kubelet ...
	I1213 10:32:19.538109  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:32:19.653417  424744 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:32:19.653454  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:32:19.748940  424744 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:32:19.748966  424744 logs.go:123] Gathering logs for kube-apiserver [89f23bbb20fa65bbf4a3baf0e2957049b356f26d2ad63e230f17e7373570ef2f] ...
	I1213 10:32:19.748990  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89f23bbb20fa65bbf4a3baf0e2957049b356f26d2ad63e230f17e7373570ef2f"
	I1213 10:32:22.304927  424744 api_server.go:253] Checking apiserver healthz at https://192.168.72.174:8443/healthz ...
	I1213 10:32:22.305741  424744 api_server.go:269] stopped: https://192.168.72.174:8443/healthz: Get "https://192.168.72.174:8443/healthz": dial tcp 192.168.72.174:8443: connect: connection refused
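	The repeated healthz failures above can be reproduced from the host while the apiserver is down; a minimal sketch against the same endpoint (-k skips TLS verification, which is enough for a reachability check):
	  $ curl -ksS --max-time 2 https://192.168.72.174:8443/healthz || echo "apiserver not reachable"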
	I1213 10:32:22.305821  424744 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:32:22.305892  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:32:22.352775  424744 cri.go:89] found id: "89f23bbb20fa65bbf4a3baf0e2957049b356f26d2ad63e230f17e7373570ef2f"
	I1213 10:32:22.352805  424744 cri.go:89] found id: ""
	I1213 10:32:22.352830  424744 logs.go:282] 1 containers: [89f23bbb20fa65bbf4a3baf0e2957049b356f26d2ad63e230f17e7373570ef2f]
	I1213 10:32:22.352892  424744 ssh_runner.go:195] Run: which crictl
	I1213 10:32:22.357936  424744 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:32:22.358026  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:32:22.404199  424744 cri.go:89] found id: "0e87939746492795b2c9cffeaa0960f79cd26fef93ad2ee13bad7163090179f2"
	I1213 10:32:22.404228  424744 cri.go:89] found id: ""
	I1213 10:32:22.404240  424744 logs.go:282] 1 containers: [0e87939746492795b2c9cffeaa0960f79cd26fef93ad2ee13bad7163090179f2]
	I1213 10:32:22.404315  424744 ssh_runner.go:195] Run: which crictl
	I1213 10:32:22.409467  424744 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:32:22.409542  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:32:22.457748  424744 cri.go:89] found id: ""
	I1213 10:32:22.457791  424744 logs.go:282] 0 containers: []
	W1213 10:32:22.457802  424744 logs.go:284] No container was found matching "coredns"
	I1213 10:32:22.457809  424744 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:32:22.457862  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:32:22.507060  424744 cri.go:89] found id: "d1fa391bd41e19817effdbc921fa9a56f3959cd3079e4a6d02474b14b85521a0"
	I1213 10:32:22.507089  424744 cri.go:89] found id: "c4bfd7d6729ee1cd3c29226bf896073ae01ecb088bbcc4e539adbf50524ada00"
	I1213 10:32:22.507096  424744 cri.go:89] found id: ""
	I1213 10:32:22.507105  424744 logs.go:282] 2 containers: [d1fa391bd41e19817effdbc921fa9a56f3959cd3079e4a6d02474b14b85521a0 c4bfd7d6729ee1cd3c29226bf896073ae01ecb088bbcc4e539adbf50524ada00]
	I1213 10:32:22.507170  424744 ssh_runner.go:195] Run: which crictl
	I1213 10:32:22.512886  424744 ssh_runner.go:195] Run: which crictl
	I1213 10:32:22.518889  424744 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:32:22.518964  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:32:22.559993  424744 cri.go:89] found id: ""
	I1213 10:32:22.560025  424744 logs.go:282] 0 containers: []
	W1213 10:32:22.560035  424744 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:32:22.560043  424744 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:32:22.560109  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:32:22.607965  424744 cri.go:89] found id: "908981b2654466c610edc2a4f838527f94a60873e10fc4a4b2a1cb7f7b7e8802"
	I1213 10:32:22.608002  424744 cri.go:89] found id: ""
	I1213 10:32:22.608014  424744 logs.go:282] 1 containers: [908981b2654466c610edc2a4f838527f94a60873e10fc4a4b2a1cb7f7b7e8802]
	I1213 10:32:22.608085  424744 ssh_runner.go:195] Run: which crictl
	I1213 10:32:22.613703  424744 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:32:22.613800  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:32:22.661062  424744 cri.go:89] found id: ""
	I1213 10:32:22.661088  424744 logs.go:282] 0 containers: []
	W1213 10:32:22.661099  424744 logs.go:284] No container was found matching "kindnet"
	I1213 10:32:22.661107  424744 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 10:32:22.661165  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 10:32:22.702785  424744 cri.go:89] found id: "de752d57f9e4e8e9f1bd0bfc21385294655f223beef697ab444f6770da9bc639"
	I1213 10:32:22.702813  424744 cri.go:89] found id: ""
	I1213 10:32:22.702824  424744 logs.go:282] 1 containers: [de752d57f9e4e8e9f1bd0bfc21385294655f223beef697ab444f6770da9bc639]
	I1213 10:32:22.702884  424744 ssh_runner.go:195] Run: which crictl
	I1213 10:32:22.708305  424744 logs.go:123] Gathering logs for kubelet ...
	I1213 10:32:22.708338  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:32:22.840230  424744 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:32:22.840273  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:32:22.933489  424744 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:32:22.933508  424744 logs.go:123] Gathering logs for kube-apiserver [89f23bbb20fa65bbf4a3baf0e2957049b356f26d2ad63e230f17e7373570ef2f] ...
	I1213 10:32:22.933520  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89f23bbb20fa65bbf4a3baf0e2957049b356f26d2ad63e230f17e7373570ef2f"
	I1213 10:32:22.979043  424744 logs.go:123] Gathering logs for kube-scheduler [d1fa391bd41e19817effdbc921fa9a56f3959cd3079e4a6d02474b14b85521a0] ...
	I1213 10:32:22.979095  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1fa391bd41e19817effdbc921fa9a56f3959cd3079e4a6d02474b14b85521a0"
	I1213 10:32:23.071308  424744 logs.go:123] Gathering logs for kube-scheduler [c4bfd7d6729ee1cd3c29226bf896073ae01ecb088bbcc4e539adbf50524ada00] ...
	I1213 10:32:23.071354  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4bfd7d6729ee1cd3c29226bf896073ae01ecb088bbcc4e539adbf50524ada00"
	I1213 10:32:23.120450  428709 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 10:32:23.121628  428709 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:32:23.123969  428709 config.go:182] Loaded profile config "pause-617427": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:32:23.124124  428709 config.go:182] Loaded profile config "running-upgrade-689860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1213 10:32:23.124235  428709 config.go:182] Loaded profile config "stopped-upgrade-422744": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1213 10:32:23.124261  428709 start.go:1901] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1213 10:32:23.124380  428709 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:32:23.175696  428709 out.go:179] * Using the kvm2 driver based on user configuration
	I1213 10:32:23.180257  428709 start.go:309] selected driver: kvm2
	I1213 10:32:23.180269  428709 start.go:927] validating driver "kvm2" against <nil>
	I1213 10:32:23.180283  428709 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:32:23.180715  428709 start.go:1901] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1213 10:32:23.180801  428709 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:32:23.181588  428709 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1213 10:32:23.181816  428709 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 10:32:23.181841  428709 cni.go:84] Creating CNI manager for ""
	I1213 10:32:23.181913  428709 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 10:32:23.181919  428709 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 10:32:23.181932  428709 start.go:1901] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1213 10:32:23.181983  428709 start.go:353] cluster config:
	{Name:guest-964680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:guest-964680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:32:23.182160  428709 iso.go:125] acquiring lock: {Name:mk4ce8bfab58620efe86d1c7a68d79ed9c81b6ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:32:23.183869  428709 out.go:179] * Starting minikube without Kubernetes in cluster guest-964680
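	This start corresponds to the last entry in the Audit table above; spelled out as a full command line, using the test binary path used throughout this report:
	  $ out/minikube-linux-amd64 start -p guest-964680 --no-kubernetes --driver=kvm2 --container-runtime=crio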
	
	
	==> CRI-O <==
	Dec 13 10:32:23 pause-617427 crio[2618]: time="2025-12-13 10:32:23.491236490Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765621943491197152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f0eee6a3-1e5b-4174-9977-5562cb23752c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 10:32:23 pause-617427 crio[2618]: time="2025-12-13 10:32:23.492335755Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3614beef-ff1a-4fb9-a01e-ed5028abe1b8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 10:32:23 pause-617427 crio[2618]: time="2025-12-13 10:32:23.492423288Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3614beef-ff1a-4fb9-a01e-ed5028abe1b8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 10:32:23 pause-617427 crio[2618]: time="2025-12-13 10:32:23.492806715Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5325cfe9f5561be21720a86c87eee2f1f6ebf65d86a04abb02e40044bcaabd05,PodSandboxId:414c5af7e883ce38fdc32bb1cba14f4770832010e89b12eec233b145e27cc3cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765621930081443023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gm4sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bfb5038-1b82-4e76-abab-e908bfd657b4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb36fde6e63cd3d8a62fafdbbe84281b8e6389b3fb9be6d7b7e7a14f9da5956d,PodSandboxId:57ab7851cba49c1010c06eaf26909557d01ed29f8b97563c79f252b8100bc7e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765621925301693068,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bd40aa68ed66b8319d92da33ea70edf,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7dc47f7676ca1fdf36818722a12bb5c3cac4bc0439eb25d3ceb36c44c82e8f3,PodSandboxId:c1c2196c5138a77d32f29b5389b2197c2e42954e2ee36fde2ac2296521025c74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b
6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765621925289145176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618d903e047425d639e2fa8d2b3adecd,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ade441284ab13ce98dc1c6d753626cf89c92bd91e7bb011c0ad6c98419219de8,PodSandboxId:a2b88aeef100702bc7762bea2672bcea8d0103b171404de1a7511c86412f3e4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e47
5e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765621925316293074,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec59d93ee055fb390175dec9fad66928,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebddbbc15d05a035f3ad7e39d494b583ab6573dd5634e6c4b14a2006e5f34906,PodSandboxId:345062c722672330c0f0b7eebc91843e0b5b81425d487da9d04954dabd2bc876,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765621925341301515,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243ed61afb6773c33c581096fc96bd12,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873ae37f7600787ee6b7515c1a8e317cc2a522b024e9ae2528b249130bcc4fdb,PodSandboxId:fc408c002b513
070a1afe3d8c88aaca34b4c19fe32bee1b4b9a50da21cb36f7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765621916508610360,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f2c4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607fcf8e-d8a3-471d-96f3-e1b24063d251,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639ae66155a42496ae026ae909a35634c613909c8f456458e05a2009d07d3c0b,PodSandboxId:414c5af7e883ce38fdc32bb1cba14f4770832010e89b12
eec233b145e27cc3cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765621917380298505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gm4sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bfb5038-1b82-4e76-abab-e908bfd657b4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baf35aa1d2b45bb5525d0dccfb256a0f277a28a168309f93d5f7aeb22cc81f6d,PodSandboxId:c1c2196c5138a77d32f29b5389b2197c2e42954e2ee36fde2ac2296521025c74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765621916371141105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618d903e047425d639e2fa8d2b3adecd,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.port
s: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d40453aecc37eb4c347838c490725ea24158d53b71297d2e29fe1da49bde772,PodSandboxId:57ab7851cba49c1010c06eaf26909557d01ed29f8b97563c79f252b8100bc7e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765621916286701993,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bd40aa68ed66b8319d92da33ea70e
df,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e603073854d97f3ddecca2d17efe16393662b7100e35615ec789b4fad68d34c0,PodSandboxId:345062c722672330c0f0b7eebc91843e0b5b81425d487da9d04954dabd2bc876,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765621916234207324,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617427,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 243ed61afb6773c33c581096fc96bd12,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b1341425c3d71feefd8ec6f79adfb3469bc8a278b52ea781100bd84163812b,PodSandboxId:a2b88aeef100702bc7762bea2672bcea8d0103b171404de1a7511c86412f3e4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765621916057102580,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec59d93ee055fb390175dec9fad66928,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b01c52d4399f83e18c4618beb6bfcddb5f8b44399baddfbd157ec084d0af2a,PodSandboxId:e3da12492784dd21d4ffef13a5d2397d2bc129cdf4619eff7eea76ebe0ba8f0c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7a
f1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765621850336831893,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f2c4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607fcf8e-d8a3-471d-96f3-e1b24063d251,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3614beef-ff1a-4fb9-a01e-ed5028abe1b8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 10:32:23 pause-617427 crio[2618]: time="2025-12-13 10:32:23.544891184Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=02c0669b-dd4b-45a2-aefe-bfdfb56f2be4 name=/runtime.v1.RuntimeService/Version
	Dec 13 10:32:23 pause-617427 crio[2618]: time="2025-12-13 10:32:23.545074877Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=02c0669b-dd4b-45a2-aefe-bfdfb56f2be4 name=/runtime.v1.RuntimeService/Version
	Dec 13 10:32:23 pause-617427 crio[2618]: time="2025-12-13 10:32:23.546453949Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4fe1d776-44d8-471c-876d-359ad4c6688a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 10:32:23 pause-617427 crio[2618]: time="2025-12-13 10:32:23.547003658Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765621943546976895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4fe1d776-44d8-471c-876d-359ad4c6688a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 10:32:23 pause-617427 crio[2618]: time="2025-12-13 10:32:23.549315332Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16b3dc7d-64e5-4dc9-b72e-0397e5a56a68 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 10:32:23 pause-617427 crio[2618]: time="2025-12-13 10:32:23.549389074Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16b3dc7d-64e5-4dc9-b72e-0397e5a56a68 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 10:32:23 pause-617427 crio[2618]: time="2025-12-13 10:32:23.549752646Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5325cfe9f5561be21720a86c87eee2f1f6ebf65d86a04abb02e40044bcaabd05,PodSandboxId:414c5af7e883ce38fdc32bb1cba14f4770832010e89b12eec233b145e27cc3cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765621930081443023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gm4sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bfb5038-1b82-4e76-abab-e908bfd657b4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb36fde6e63cd3d8a62fafdbbe84281b8e6389b3fb9be6d7b7e7a14f9da5956d,PodSandboxId:57ab7851cba49c1010c06eaf26909557d01ed29f8b97563c79f252b8100bc7e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765621925301693068,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bd40aa68ed66b8319d92da33ea70edf,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7dc47f7676ca1fdf36818722a12bb5c3cac4bc0439eb25d3ceb36c44c82e8f3,PodSandboxId:c1c2196c5138a77d32f29b5389b2197c2e42954e2ee36fde2ac2296521025c74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b
6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765621925289145176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618d903e047425d639e2fa8d2b3adecd,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ade441284ab13ce98dc1c6d753626cf89c92bd91e7bb011c0ad6c98419219de8,PodSandboxId:a2b88aeef100702bc7762bea2672bcea8d0103b171404de1a7511c86412f3e4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e47
5e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765621925316293074,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec59d93ee055fb390175dec9fad66928,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebddbbc15d05a035f3ad7e39d494b583ab6573dd5634e6c4b14a2006e5f34906,PodSandboxId:345062c722672330c0f0b7eebc91843e0b5b81425d487da9d04954dabd2bc876,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765621925341301515,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243ed61afb6773c33c581096fc96bd12,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873ae37f7600787ee6b7515c1a8e317cc2a522b024e9ae2528b249130bcc4fdb,PodSandboxId:fc408c002b513
070a1afe3d8c88aaca34b4c19fe32bee1b4b9a50da21cb36f7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765621916508610360,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f2c4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607fcf8e-d8a3-471d-96f3-e1b24063d251,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639ae66155a42496ae026ae909a35634c613909c8f456458e05a2009d07d3c0b,PodSandboxId:414c5af7e883ce38fdc32bb1cba14f4770832010e89b12
eec233b145e27cc3cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765621917380298505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gm4sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bfb5038-1b82-4e76-abab-e908bfd657b4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baf35aa1d2b45bb5525d0dccfb256a0f277a28a168309f93d5f7aeb22cc81f6d,PodSandboxId:c1c2196c5138a77d32f29b5389b2197c2e42954e2ee36fde2ac2296521025c74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765621916371141105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618d903e047425d639e2fa8d2b3adecd,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.port
s: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d40453aecc37eb4c347838c490725ea24158d53b71297d2e29fe1da49bde772,PodSandboxId:57ab7851cba49c1010c06eaf26909557d01ed29f8b97563c79f252b8100bc7e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765621916286701993,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bd40aa68ed66b8319d92da33ea70e
df,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e603073854d97f3ddecca2d17efe16393662b7100e35615ec789b4fad68d34c0,PodSandboxId:345062c722672330c0f0b7eebc91843e0b5b81425d487da9d04954dabd2bc876,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765621916234207324,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617427,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 243ed61afb6773c33c581096fc96bd12,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b1341425c3d71feefd8ec6f79adfb3469bc8a278b52ea781100bd84163812b,PodSandboxId:a2b88aeef100702bc7762bea2672bcea8d0103b171404de1a7511c86412f3e4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765621916057102580,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec59d93ee055fb390175dec9fad66928,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b01c52d4399f83e18c4618beb6bfcddb5f8b44399baddfbd157ec084d0af2a,PodSandboxId:e3da12492784dd21d4ffef13a5d2397d2bc129cdf4619eff7eea76ebe0ba8f0c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7a
f1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765621850336831893,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f2c4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607fcf8e-d8a3-471d-96f3-e1b24063d251,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=16b3dc7d-64e5-4dc9-b72e-0397e5a56a68 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 10:32:23 pause-617427 crio[2618]: time="2025-12-13 10:32:23.608573415Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a672f58a-3cdc-42a2-9734-d3c3090c991c name=/runtime.v1.RuntimeService/Version
	Dec 13 10:32:23 pause-617427 crio[2618]: time="2025-12-13 10:32:23.608708883Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a672f58a-3cdc-42a2-9734-d3c3090c991c name=/runtime.v1.RuntimeService/Version
	Dec 13 10:32:23 pause-617427 crio[2618]: time="2025-12-13 10:32:23.610892153Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=86c9ccd0-a6f3-4f87-b227-282c6d0bae7d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 10:32:23 pause-617427 crio[2618]: time="2025-12-13 10:32:23.611575862Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765621943611539679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=86c9ccd0-a6f3-4f87-b227-282c6d0bae7d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 10:32:23 pause-617427 crio[2618]: time="2025-12-13 10:32:23.612576487Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7a6a25e5-4a37-48d1-a38b-a1d8bebe2983 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 10:32:23 pause-617427 crio[2618]: time="2025-12-13 10:32:23.612789167Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7a6a25e5-4a37-48d1-a38b-a1d8bebe2983 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 10:32:23 pause-617427 crio[2618]: time="2025-12-13 10:32:23.613331281Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5325cfe9f5561be21720a86c87eee2f1f6ebf65d86a04abb02e40044bcaabd05,PodSandboxId:414c5af7e883ce38fdc32bb1cba14f4770832010e89b12eec233b145e27cc3cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765621930081443023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gm4sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bfb5038-1b82-4e76-abab-e908bfd657b4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb36fde6e63cd3d8a62fafdbbe84281b8e6389b3fb9be6d7b7e7a14f9da5956d,PodSandboxId:57ab7851cba49c1010c06eaf26909557d01ed29f8b97563c79f252b8100bc7e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765621925301693068,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bd40aa68ed66b8319d92da33ea70edf,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7dc47f7676ca1fdf36818722a12bb5c3cac4bc0439eb25d3ceb36c44c82e8f3,PodSandboxId:c1c2196c5138a77d32f29b5389b2197c2e42954e2ee36fde2ac2296521025c74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b
6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765621925289145176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618d903e047425d639e2fa8d2b3adecd,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ade441284ab13ce98dc1c6d753626cf89c92bd91e7bb011c0ad6c98419219de8,PodSandboxId:a2b88aeef100702bc7762bea2672bcea8d0103b171404de1a7511c86412f3e4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e47
5e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765621925316293074,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec59d93ee055fb390175dec9fad66928,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebddbbc15d05a035f3ad7e39d494b583ab6573dd5634e6c4b14a2006e5f34906,PodSandboxId:345062c722672330c0f0b7eebc91843e0b5b81425d487da9d04954dabd2bc876,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765621925341301515,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243ed61afb6773c33c581096fc96bd12,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873ae37f7600787ee6b7515c1a8e317cc2a522b024e9ae2528b249130bcc4fdb,PodSandboxId:fc408c002b513
070a1afe3d8c88aaca34b4c19fe32bee1b4b9a50da21cb36f7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765621916508610360,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f2c4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607fcf8e-d8a3-471d-96f3-e1b24063d251,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639ae66155a42496ae026ae909a35634c613909c8f456458e05a2009d07d3c0b,PodSandboxId:414c5af7e883ce38fdc32bb1cba14f4770832010e89b12
eec233b145e27cc3cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765621917380298505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gm4sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bfb5038-1b82-4e76-abab-e908bfd657b4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baf35aa1d2b45bb5525d0dccfb256a0f277a28a168309f93d5f7aeb22cc81f6d,PodSandboxId:c1c2196c5138a77d32f29b5389b2197c2e42954e2ee36fde2ac2296521025c74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765621916371141105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618d903e047425d639e2fa8d2b3adecd,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.port
s: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d40453aecc37eb4c347838c490725ea24158d53b71297d2e29fe1da49bde772,PodSandboxId:57ab7851cba49c1010c06eaf26909557d01ed29f8b97563c79f252b8100bc7e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765621916286701993,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bd40aa68ed66b8319d92da33ea70e
df,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e603073854d97f3ddecca2d17efe16393662b7100e35615ec789b4fad68d34c0,PodSandboxId:345062c722672330c0f0b7eebc91843e0b5b81425d487da9d04954dabd2bc876,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765621916234207324,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617427,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 243ed61afb6773c33c581096fc96bd12,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b1341425c3d71feefd8ec6f79adfb3469bc8a278b52ea781100bd84163812b,PodSandboxId:a2b88aeef100702bc7762bea2672bcea8d0103b171404de1a7511c86412f3e4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765621916057102580,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec59d93ee055fb390175dec9fad66928,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b01c52d4399f83e18c4618beb6bfcddb5f8b44399baddfbd157ec084d0af2a,PodSandboxId:e3da12492784dd21d4ffef13a5d2397d2bc129cdf4619eff7eea76ebe0ba8f0c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7a
f1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765621850336831893,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f2c4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607fcf8e-d8a3-471d-96f3-e1b24063d251,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7a6a25e5-4a37-48d1-a38b-a1d8bebe2983 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 10:32:23 pause-617427 crio[2618]: time="2025-12-13 10:32:23.661415196Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=42c5348f-b41b-4565-8a9b-316e0fd6022e name=/runtime.v1.RuntimeService/Version
	Dec 13 10:32:23 pause-617427 crio[2618]: time="2025-12-13 10:32:23.661562772Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=42c5348f-b41b-4565-8a9b-316e0fd6022e name=/runtime.v1.RuntimeService/Version
	Dec 13 10:32:23 pause-617427 crio[2618]: time="2025-12-13 10:32:23.662883338Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=90202116-d574-4644-90ee-a7f6674957ae name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 10:32:23 pause-617427 crio[2618]: time="2025-12-13 10:32:23.663633433Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765621943663601600,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=90202116-d574-4644-90ee-a7f6674957ae name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 10:32:23 pause-617427 crio[2618]: time="2025-12-13 10:32:23.664697949Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6940e0f5-61a3-48cf-9952-da440abe556e name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 10:32:23 pause-617427 crio[2618]: time="2025-12-13 10:32:23.664783196Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6940e0f5-61a3-48cf-9952-da440abe556e name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 10:32:23 pause-617427 crio[2618]: time="2025-12-13 10:32:23.665347876Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5325cfe9f5561be21720a86c87eee2f1f6ebf65d86a04abb02e40044bcaabd05,PodSandboxId:414c5af7e883ce38fdc32bb1cba14f4770832010e89b12eec233b145e27cc3cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765621930081443023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gm4sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bfb5038-1b82-4e76-abab-e908bfd657b4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb36fde6e63cd3d8a62fafdbbe84281b8e6389b3fb9be6d7b7e7a14f9da5956d,PodSandboxId:57ab7851cba49c1010c06eaf26909557d01ed29f8b97563c79f252b8100bc7e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765621925301693068,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bd40aa68ed66b8319d92da33ea70edf,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7dc47f7676ca1fdf36818722a12bb5c3cac4bc0439eb25d3ceb36c44c82e8f3,PodSandboxId:c1c2196c5138a77d32f29b5389b2197c2e42954e2ee36fde2ac2296521025c74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b
6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765621925289145176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618d903e047425d639e2fa8d2b3adecd,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ade441284ab13ce98dc1c6d753626cf89c92bd91e7bb011c0ad6c98419219de8,PodSandboxId:a2b88aeef100702bc7762bea2672bcea8d0103b171404de1a7511c86412f3e4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e47
5e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765621925316293074,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec59d93ee055fb390175dec9fad66928,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebddbbc15d05a035f3ad7e39d494b583ab6573dd5634e6c4b14a2006e5f34906,PodSandboxId:345062c722672330c0f0b7eebc91843e0b5b81425d487da9d04954dabd2bc876,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765621925341301515,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243ed61afb6773c33c581096fc96bd12,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873ae37f7600787ee6b7515c1a8e317cc2a522b024e9ae2528b249130bcc4fdb,PodSandboxId:fc408c002b513
070a1afe3d8c88aaca34b4c19fe32bee1b4b9a50da21cb36f7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765621916508610360,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f2c4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607fcf8e-d8a3-471d-96f3-e1b24063d251,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639ae66155a42496ae026ae909a35634c613909c8f456458e05a2009d07d3c0b,PodSandboxId:414c5af7e883ce38fdc32bb1cba14f4770832010e89b12
eec233b145e27cc3cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765621917380298505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gm4sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bfb5038-1b82-4e76-abab-e908bfd657b4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baf35aa1d2b45bb5525d0dccfb256a0f277a28a168309f93d5f7aeb22cc81f6d,PodSandboxId:c1c2196c5138a77d32f29b5389b2197c2e42954e2ee36fde2ac2296521025c74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765621916371141105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618d903e047425d639e2fa8d2b3adecd,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.port
s: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d40453aecc37eb4c347838c490725ea24158d53b71297d2e29fe1da49bde772,PodSandboxId:57ab7851cba49c1010c06eaf26909557d01ed29f8b97563c79f252b8100bc7e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765621916286701993,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bd40aa68ed66b8319d92da33ea70e
df,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e603073854d97f3ddecca2d17efe16393662b7100e35615ec789b4fad68d34c0,PodSandboxId:345062c722672330c0f0b7eebc91843e0b5b81425d487da9d04954dabd2bc876,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765621916234207324,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617427,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 243ed61afb6773c33c581096fc96bd12,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b1341425c3d71feefd8ec6f79adfb3469bc8a278b52ea781100bd84163812b,PodSandboxId:a2b88aeef100702bc7762bea2672bcea8d0103b171404de1a7511c86412f3e4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765621916057102580,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec59d93ee055fb390175dec9fad66928,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b01c52d4399f83e18c4618beb6bfcddb5f8b44399baddfbd157ec084d0af2a,PodSandboxId:e3da12492784dd21d4ffef13a5d2397d2bc129cdf4619eff7eea76ebe0ba8f0c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7a
f1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765621850336831893,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f2c4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607fcf8e-d8a3-471d-96f3-e1b24063d251,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6940e0f5-61a3-48cf-9952-da440abe556e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	5325cfe9f5561       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago       Running             coredns                   2                   414c5af7e883c       coredns-66bc5c9577-gm4sm               kube-system
	ebddbbc15d05a       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   18 seconds ago       Running             etcd                      2                   345062c722672       etcd-pause-617427                      kube-system
	ade441284ab13       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   18 seconds ago       Running             kube-controller-manager   2                   a2b88aeef1007       kube-controller-manager-pause-617427   kube-system
	bb36fde6e63cd       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   18 seconds ago       Running             kube-scheduler            2                   57ab7851cba49       kube-scheduler-pause-617427            kube-system
	d7dc47f7676ca       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   18 seconds ago       Running             kube-apiserver            2                   c1c2196c5138a       kube-apiserver-pause-617427            kube-system
	639ae66155a42       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   26 seconds ago       Exited              coredns                   1                   414c5af7e883c       coredns-66bc5c9577-gm4sm               kube-system
	873ae37f76007       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   27 seconds ago       Running             kube-proxy                1                   fc408c002b513       kube-proxy-f2c4f                       kube-system
	baf35aa1d2b45       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   27 seconds ago       Exited              kube-apiserver            1                   c1c2196c5138a       kube-apiserver-pause-617427            kube-system
	2d40453aecc37       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   27 seconds ago       Exited              kube-scheduler            1                   57ab7851cba49       kube-scheduler-pause-617427            kube-system
	e603073854d97       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   27 seconds ago       Exited              etcd                      1                   345062c722672       etcd-pause-617427                      kube-system
	a6b1341425c3d       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   27 seconds ago       Exited              kube-controller-manager   1                   a2b88aeef1007       kube-controller-manager-pause-617427   kube-system
	d7b01c52d4399       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   About a minute ago   Exited              kube-proxy                0                   e3da12492784d       kube-proxy-f2c4f                       kube-system
	
	
	==> coredns [5325cfe9f5561be21720a86c87eee2f1f6ebf65d86a04abb02e40044bcaabd05] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50077 - 57561 "HINFO IN 1273366703144615730.5193948991097563550. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.076348536s
	
	
	==> coredns [639ae66155a42496ae026ae909a35634c613909c8f456458e05a2009d07d3c0b] <==
	
	
	==> describe nodes <==
	Name:               pause-617427
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-617427
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b
	                    minikube.k8s.io/name=pause-617427
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T10_30_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 10:30:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-617427
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 10:32:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 10:32:09 +0000   Sat, 13 Dec 2025 10:30:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 10:32:09 +0000   Sat, 13 Dec 2025 10:30:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 10:32:09 +0000   Sat, 13 Dec 2025 10:30:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 10:32:09 +0000   Sat, 13 Dec 2025 10:30:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.105
	  Hostname:    pause-617427
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 d4c40a8d16ee4077959d0f23e318def8
	  System UUID:                d4c40a8d-16ee-4077-959d-0f23e318def8
	  Boot ID:                    aaf509ab-80b3-4c29-9ef3-70c306fba65f
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-gm4sm                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     93s
	  kube-system                 etcd-pause-617427                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         98s
	  kube-system                 kube-apiserver-pause-617427             250m (12%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-controller-manager-pause-617427    200m (10%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-f2c4f                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-scheduler-pause-617427             100m (5%)     0 (0%)      0 (0%)           0 (0%)         99s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 93s                kube-proxy       
	  Normal  Starting                 14s                kube-proxy       
	  Normal  NodeHasSufficientPID     98s                kubelet          Node pause-617427 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  98s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  98s                kubelet          Node pause-617427 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s                kubelet          Node pause-617427 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 98s                kubelet          Starting kubelet.
	  Normal  NodeReady                97s                kubelet          Node pause-617427 status is now: NodeReady
	  Normal  RegisteredNode           94s                node-controller  Node pause-617427 event: Registered Node pause-617427 in Controller
	  Normal  Starting                 19s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19s (x8 over 19s)  kubelet          Node pause-617427 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x8 over 19s)  kubelet          Node pause-617427 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x7 over 19s)  kubelet          Node pause-617427 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11s                node-controller  Node pause-617427 event: Registered Node pause-617427 in Controller
	
	
	==> dmesg <==
	[Dec13 10:30] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001745] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002813] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.175185] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.085958] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.100906] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.143604] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.507921] kauditd_printk_skb: 18 callbacks suppressed
	[Dec13 10:31] kauditd_printk_skb: 190 callbacks suppressed
	[  +2.993638] kauditd_printk_skb: 319 callbacks suppressed
	[Dec13 10:32] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [e603073854d97f3ddecca2d17efe16393662b7100e35615ec789b4fad68d34c0] <==
	{"level":"warn","ts":"2025-12-13T10:31:59.525548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:31:59.545824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:31:59.575249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:31:59.579289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:31:59.601220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:31:59.615807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:31:59.722505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50682","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T10:32:01.829219Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-13T10:32:01.829301Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-617427","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.105:2380"],"advertise-client-urls":["https://192.168.50.105:2379"]}
	{"level":"error","ts":"2025-12-13T10:32:01.829430Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T10:32:01.831611Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T10:32:01.831685Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T10:32:01.831740Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d113b8292a777974","current-leader-member-id":"d113b8292a777974"}
	{"level":"info","ts":"2025-12-13T10:32:01.831835Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-13T10:32:01.831899Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-13T10:32:01.831917Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T10:32:01.831975Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T10:32:01.831984Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-13T10:32:01.832074Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.105:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T10:32:01.832114Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.105:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T10:32:01.832124Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.105:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T10:32:01.836146Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.105:2380"}
	{"level":"error","ts":"2025-12-13T10:32:01.836283Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.105:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T10:32:01.836349Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.105:2380"}
	{"level":"info","ts":"2025-12-13T10:32:01.836391Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-617427","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.105:2380"],"advertise-client-urls":["https://192.168.50.105:2379"]}
	
	
	==> etcd [ebddbbc15d05a035f3ad7e39d494b583ab6573dd5634e6c4b14a2006e5f34906] <==
	{"level":"warn","ts":"2025-12-13T10:32:07.752921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:07.780521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:07.807873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:07.832667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:07.849461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:07.863397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:07.878299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:07.900958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:07.909655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:07.926340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:07.947829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:07.967814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:08.000518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:08.009564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:08.023212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:08.056844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:08.063893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:08.086129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:08.106880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:08.123577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:08.130513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:08.142309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:08.232985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36814","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T10:32:17.425843Z","caller":"traceutil/trace.go:172","msg":"trace[42633227] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"324.988996ms","start":"2025-12-13T10:32:17.100834Z","end":"2025-12-13T10:32:17.425823Z","steps":["trace[42633227] 'process raft request'  (duration: 324.90086ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T10:32:17.426703Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-13T10:32:17.100751Z","time spent":"325.18136ms","remote":"127.0.0.1:36016","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5031,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-617427\" mod_revision:412 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-617427\" value_size:4969 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-617427\" > >"}
	
	
	==> kernel <==
	 10:32:24 up 2 min,  0 users,  load average: 1.14, 0.40, 0.15
	Linux pause-617427 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 11 23:11:39 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [baf35aa1d2b45bb5525d0dccfb256a0f277a28a168309f93d5f7aeb22cc81f6d] <==
	F1213 10:32:00.495379       1 hooks.go:204] PostStartHook "priority-and-fairness-config-producer" failed: APF bootstrap ensurer timed out waiting for cache sync
	E1213 10:32:00.581689       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="ipallocator-repair-controller"
	I1213 10:32:00.581815       1 repairip.go:214] Shutting down ipallocator-repair-controller
	E1213 10:32:00.583332       1 customresource_discovery_controller.go:297] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	I1213 10:32:00.583396       1 customresource_discovery_controller.go:298] Shutting down DiscoveryController
	I1213 10:32:00.583451       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I1213 10:32:00.583553       1 controller.go:84] Shutting down OpenAPI AggregationController
	I1213 10:32:00.583625       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	E1213 10:32:00.583997       1 system_namespaces_controller.go:69] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	E1213 10:32:00.584114       1 cache.go:35] "Unhandled Error" err="Unable to sync caches for APIServiceRegistrationController controller" logger="UnhandledError"
	I1213 10:32:00.584155       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E1213 10:32:00.584200       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="kubernetes-service-cidr-controller"
	I1213 10:32:00.584264       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1213 10:32:00.584308       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1213 10:32:00.584356       1 controller.go:89] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	E1213 10:32:00.584379       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="configmaps"
	I1213 10:32:00.584397       1 system_namespaces_controller.go:70] Shutting down system namespaces controller
	I1213 10:32:00.584422       1 object_count_tracker.go:141] "StorageObjectCountTracker pruner is exiting"
	E1213 10:32:00.584455       1 cache.go:35] "Unhandled Error" err="Unable to sync caches for RemoteAvailability controller" logger="UnhandledError"
	E1213 10:32:00.584473       1 cache.go:35] "Unhandled Error" err="Unable to sync caches for LocalAvailability controller" logger="UnhandledError"
	F1213 10:32:00.584492       1 hooks.go:204] PostStartHook "crd-informer-synced" failed: context canceled
	E1213 10:32:00.654142       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="cluster_authentication_trust_controller"
	I1213 10:32:00.654238       1 cluster_authentication_trust_controller.go:467] Shutting down cluster_authentication_trust_controller controller
	E1213 10:32:00.654271       1 gc_controller.go:84] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	E1213 10:32:00.654348       1 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://localhost:8443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-s7wjmkxifsc3xv5rprgdltj6yq\": time to stop HTTP server" interval="200ms"
	
	
	==> kube-apiserver [d7dc47f7676ca1fdf36818722a12bb5c3cac4bc0439eb25d3ceb36c44c82e8f3] <==
	I1213 10:32:09.136173       1 policy_source.go:240] refreshing policies
	I1213 10:32:09.158481       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 10:32:09.159445       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1213 10:32:09.161198       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1213 10:32:09.161414       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1213 10:32:09.161782       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1213 10:32:09.163748       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1213 10:32:09.172272       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1213 10:32:09.172309       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 10:32:09.174552       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 10:32:09.180342       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 10:32:09.181133       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 10:32:09.181460       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 10:32:09.186345       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 10:32:09.227784       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1213 10:32:09.233290       1 cache.go:39] Caches are synced for autoregister controller
	I1213 10:32:09.780245       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 10:32:09.967094       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 10:32:11.140638       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 10:32:11.275088       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 10:32:11.349198       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 10:32:11.368023       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 10:32:12.583631       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 10:32:12.781677       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 10:32:12.830647       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [a6b1341425c3d71feefd8ec6f79adfb3469bc8a278b52ea781100bd84163812b] <==
	I1213 10:31:57.721003       1 serving.go:386] Generated self-signed cert in-memory
	I1213 10:31:58.448215       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1213 10:31:58.448308       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 10:31:58.450818       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1213 10:31:58.450950       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1213 10:31:58.451260       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1213 10:31:58.451353       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [ade441284ab13ce98dc1c6d753626cf89c92bd91e7bb011c0ad6c98419219de8] <==
	I1213 10:32:12.493460       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 10:32:12.496599       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1213 10:32:12.497563       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1213 10:32:12.498191       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1213 10:32:12.508951       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 10:32:12.515018       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 10:32:12.521651       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 10:32:12.522940       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 10:32:12.525675       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 10:32:12.525818       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 10:32:12.525844       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 10:32:12.525797       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1213 10:32:12.526460       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1213 10:32:12.528217       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1213 10:32:12.528343       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-617427"
	I1213 10:32:12.528416       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 10:32:12.527681       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1213 10:32:12.528982       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1213 10:32:12.528928       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1213 10:32:12.533283       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1213 10:32:12.538550       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1213 10:32:12.542000       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 10:32:12.545209       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 10:32:12.567814       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 10:32:12.574001       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	
	
	==> kube-proxy [873ae37f7600787ee6b7515c1a8e317cc2a522b024e9ae2528b249130bcc4fdb] <==
	E1213 10:32:04.667337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-617427&limit=500&resourceVersion=0\": dial tcp 192.168.50.105:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1213 10:32:09.290183       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 10:32:09.290408       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.105"]
	E1213 10:32:09.290607       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 10:32:09.331627       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 10:32:09.331758       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 10:32:09.331807       1 server_linux.go:132] "Using iptables Proxier"
	I1213 10:32:09.346743       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 10:32:09.347248       1 server.go:527] "Version info" version="v1.34.2"
	I1213 10:32:09.347488       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 10:32:09.351927       1 config.go:200] "Starting service config controller"
	I1213 10:32:09.351987       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 10:32:09.352002       1 config.go:106] "Starting endpoint slice config controller"
	I1213 10:32:09.352006       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 10:32:09.352015       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 10:32:09.352018       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 10:32:09.352516       1 config.go:309] "Starting node config controller"
	I1213 10:32:09.352548       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 10:32:09.352554       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 10:32:09.452313       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 10:32:09.452319       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 10:32:09.452332       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [d7b01c52d4399f83e18c4618beb6bfcddb5f8b44399baddfbd157ec084d0af2a] <==
	I1213 10:30:50.711332       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 10:30:50.811900       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 10:30:50.811943       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.105"]
	E1213 10:30:50.812011       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 10:30:50.869919       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 10:30:50.869999       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 10:30:50.870084       1 server_linux.go:132] "Using iptables Proxier"
	I1213 10:30:50.882668       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 10:30:50.883493       1 server.go:527] "Version info" version="v1.34.2"
	I1213 10:30:50.883508       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 10:30:50.891788       1 config.go:200] "Starting service config controller"
	I1213 10:30:50.891819       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 10:30:50.891835       1 config.go:106] "Starting endpoint slice config controller"
	I1213 10:30:50.891838       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 10:30:50.891847       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 10:30:50.891850       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 10:30:50.894211       1 config.go:309] "Starting node config controller"
	I1213 10:30:50.895374       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 10:30:50.895606       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 10:30:50.993300       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 10:30:50.993442       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 10:30:50.993552       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2d40453aecc37eb4c347838c490725ea24158d53b71297d2e29fe1da49bde772] <==
	I1213 10:31:58.900615       1 serving.go:386] Generated self-signed cert in-memory
	W1213 10:32:01.675280       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.50.105:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.50.105:8443: connect: connection refused - error from a previous attempt: unexpected EOF
	W1213 10:32:01.675349       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 10:32:01.675362       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 10:32:01.692579       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1213 10:32:01.692608       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1213 10:32:01.692639       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I1213 10:32:01.695684       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 10:32:01.695742       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 10:32:01.696159       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1213 10:32:01.696252       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	E1213 10:32:01.696398       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 10:32:01.696413       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 10:32:01.696435       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 10:32:01.696445       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1213 10:32:01.696521       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1213 10:32:01.696578       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1213 10:32:01.696585       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1213 10:32:01.696604       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bb36fde6e63cd3d8a62fafdbbe84281b8e6389b3fb9be6d7b7e7a14f9da5956d] <==
	I1213 10:32:08.148672       1 serving.go:386] Generated self-signed cert in-memory
	I1213 10:32:09.223744       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1213 10:32:09.223781       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 10:32:09.231634       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1213 10:32:09.231739       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1213 10:32:09.232145       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 10:32:09.232331       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 10:32:09.232706       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 10:32:09.232827       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 10:32:09.232914       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 10:32:09.232925       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 10:32:09.333357       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 10:32:09.333526       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1213 10:32:09.333698       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 10:32:07 pause-617427 kubelet[3660]: E1213 10:32:07.986897    3660 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-617427\" not found" node="pause-617427"
	Dec 13 10:32:07 pause-617427 kubelet[3660]: E1213 10:32:07.987700    3660 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-617427\" not found" node="pause-617427"
	Dec 13 10:32:07 pause-617427 kubelet[3660]: E1213 10:32:07.988857    3660 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-617427\" not found" node="pause-617427"
	Dec 13 10:32:08 pause-617427 kubelet[3660]: E1213 10:32:08.989587    3660 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-617427\" not found" node="pause-617427"
	Dec 13 10:32:08 pause-617427 kubelet[3660]: E1213 10:32:08.991427    3660 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-617427\" not found" node="pause-617427"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: I1213 10:32:09.169199    3660 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-617427"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: E1213 10:32:09.210079    3660 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-617427\" already exists" pod="kube-system/kube-controller-manager-pause-617427"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: I1213 10:32:09.210119    3660 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-617427"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: E1213 10:32:09.221487    3660 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-617427\" already exists" pod="kube-system/kube-scheduler-pause-617427"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: I1213 10:32:09.221514    3660 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-617427"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: E1213 10:32:09.234759    3660 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-pause-617427\" already exists" pod="kube-system/etcd-pause-617427"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: I1213 10:32:09.237158    3660 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-617427"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: E1213 10:32:09.254797    3660 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-617427\" already exists" pod="kube-system/kube-apiserver-pause-617427"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: I1213 10:32:09.256759    3660 kubelet_node_status.go:124] "Node was previously registered" node="pause-617427"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: I1213 10:32:09.256860    3660 kubelet_node_status.go:78] "Successfully registered node" node="pause-617427"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: I1213 10:32:09.256885    3660 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: I1213 10:32:09.257911    3660 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: I1213 10:32:09.751166    3660 apiserver.go:52] "Watching apiserver"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: I1213 10:32:09.769614    3660 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: I1213 10:32:09.774950    3660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/607fcf8e-d8a3-471d-96f3-e1b24063d251-xtables-lock\") pod \"kube-proxy-f2c4f\" (UID: \"607fcf8e-d8a3-471d-96f3-e1b24063d251\") " pod="kube-system/kube-proxy-f2c4f"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: I1213 10:32:09.775118    3660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/607fcf8e-d8a3-471d-96f3-e1b24063d251-lib-modules\") pod \"kube-proxy-f2c4f\" (UID: \"607fcf8e-d8a3-471d-96f3-e1b24063d251\") " pod="kube-system/kube-proxy-f2c4f"
	Dec 13 10:32:10 pause-617427 kubelet[3660]: I1213 10:32:10.057497    3660 scope.go:117] "RemoveContainer" containerID="639ae66155a42496ae026ae909a35634c613909c8f456458e05a2009d07d3c0b"
	Dec 13 10:32:14 pause-617427 kubelet[3660]: E1213 10:32:14.946923    3660 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765621934945613235 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 13 10:32:14 pause-617427 kubelet[3660]: E1213 10:32:14.947001    3660 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765621934945613235 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 13 10:32:15 pause-617427 kubelet[3660]: I1213 10:32:15.011334    3660 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-617427 -n pause-617427
helpers_test.go:270: (dbg) Run:  kubectl --context pause-617427 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-617427 -n pause-617427
helpers_test.go:253: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-617427 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-617427 logs -n 25: (1.490117713s)
helpers_test.go:261: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-248819 sudo systemctl cat kubelet --no-pager                                                                                                      │ cilium-248819          │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                       │ cilium-248819          │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo cat /etc/kubernetes/kubelet.conf                                                                                                      │ cilium-248819          │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo cat /var/lib/kubelet/config.yaml                                                                                                      │ cilium-248819          │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo systemctl status docker --all --full --no-pager                                                                                       │ cilium-248819          │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo systemctl cat docker --no-pager                                                                                                       │ cilium-248819          │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo cat /etc/docker/daemon.json                                                                                                           │ cilium-248819          │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo docker system info                                                                                                                    │ cilium-248819          │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo systemctl status cri-docker --all --full --no-pager                                                                                   │ cilium-248819          │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo systemctl cat cri-docker --no-pager                                                                                                   │ cilium-248819          │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                              │ cilium-248819          │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                        │ cilium-248819          │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo cri-dockerd --version                                                                                                                 │ cilium-248819          │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo systemctl status containerd --all --full --no-pager                                                                                   │ cilium-248819          │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo systemctl cat containerd --no-pager                                                                                                   │ cilium-248819          │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo cat /lib/systemd/system/containerd.service                                                                                            │ cilium-248819          │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo cat /etc/containerd/config.toml                                                                                                       │ cilium-248819          │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo containerd config dump                                                                                                                │ cilium-248819          │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo systemctl status crio --all --full --no-pager                                                                                         │ cilium-248819          │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo systemctl cat crio --no-pager                                                                                                         │ cilium-248819          │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                               │ cilium-248819          │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ ssh     │ -p cilium-248819 sudo crio config                                                                                                                           │ cilium-248819          │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ delete  │ -p cilium-248819                                                                                                                                            │ cilium-248819          │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │ 13 Dec 25 10:32 UTC │
	│ start   │ -p guest-964680 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                     │ guest-964680           │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-422744 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ stopped-upgrade-422744 │ jenkins │ v1.37.0 │ 13 Dec 25 10:32 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:32:23
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
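The header above spells out the klog line format used throughout these dumps ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg). The sketch below shows one regular expression that splits such lines into fields, which can help when filtering these logs; the sample line and the field names chosen are for illustration only.

    package main

    import (
        "fmt"
        "regexp"
    )

    // klogLine matches the header format declared above:
    //   [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    var klogLine = regexp.MustCompile(
        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

    func main() {
        // Sample taken from the "Last Start" stream below; any klog-formatted line works.
        sample := "I1213 10:32:23.105516  428709 out.go:360] Setting OutFile to fd 1 ..."
        if m := klogLine.FindStringSubmatch(sample); m != nil {
            fmt.Printf("severity=%s date=%s time=%s threadid=%s source=%s msg=%q\n",
                m[1], m[2], m[3], m[4], m[5], m[6])
        }
    }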
	I1213 10:32:23.105516  428709 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:32:23.105773  428709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:32:23.105777  428709 out.go:374] Setting ErrFile to fd 2...
	I1213 10:32:23.105780  428709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:32:23.106044  428709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
	I1213 10:32:23.106547  428709 out.go:368] Setting JSON to false
	I1213 10:32:23.107580  428709 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":8092,"bootTime":1765613851,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 10:32:23.107635  428709 start.go:143] virtualization: kvm guest
	I1213 10:32:23.109848  428709 out.go:179] * [guest-964680] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 10:32:23.111233  428709 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:32:23.111248  428709 notify.go:221] Checking for updates...
	I1213 10:32:23.113992  428709 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:32:23.115182  428709 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
	I1213 10:32:23.116373  428709 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
	I1213 10:32:18.444109  424744 api_server.go:253] Checking apiserver healthz at https://192.168.72.174:8443/healthz ...
	I1213 10:32:18.444774  424744 api_server.go:269] stopped: https://192.168.72.174:8443/healthz: Get "https://192.168.72.174:8443/healthz": dial tcp 192.168.72.174:8443: connect: connection refused
	I1213 10:32:18.444834  424744 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:32:18.444889  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:32:18.488391  424744 cri.go:89] found id: "89f23bbb20fa65bbf4a3baf0e2957049b356f26d2ad63e230f17e7373570ef2f"
	I1213 10:32:18.488418  424744 cri.go:89] found id: ""
	I1213 10:32:18.488427  424744 logs.go:282] 1 containers: [89f23bbb20fa65bbf4a3baf0e2957049b356f26d2ad63e230f17e7373570ef2f]
	I1213 10:32:18.488474  424744 ssh_runner.go:195] Run: which crictl
	I1213 10:32:18.493919  424744 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:32:18.494008  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:32:18.536053  424744 cri.go:89] found id: "0e87939746492795b2c9cffeaa0960f79cd26fef93ad2ee13bad7163090179f2"
	I1213 10:32:18.536081  424744 cri.go:89] found id: ""
	I1213 10:32:18.536093  424744 logs.go:282] 1 containers: [0e87939746492795b2c9cffeaa0960f79cd26fef93ad2ee13bad7163090179f2]
	I1213 10:32:18.536153  424744 ssh_runner.go:195] Run: which crictl
	I1213 10:32:18.541917  424744 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:32:18.542005  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:32:18.583725  424744 cri.go:89] found id: ""
	I1213 10:32:18.583760  424744 logs.go:282] 0 containers: []
	W1213 10:32:18.583772  424744 logs.go:284] No container was found matching "coredns"
	I1213 10:32:18.583780  424744 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:32:18.583839  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:32:18.629278  424744 cri.go:89] found id: "d1fa391bd41e19817effdbc921fa9a56f3959cd3079e4a6d02474b14b85521a0"
	I1213 10:32:18.629300  424744 cri.go:89] found id: "c4bfd7d6729ee1cd3c29226bf896073ae01ecb088bbcc4e539adbf50524ada00"
	I1213 10:32:18.629307  424744 cri.go:89] found id: ""
	I1213 10:32:18.629317  424744 logs.go:282] 2 containers: [d1fa391bd41e19817effdbc921fa9a56f3959cd3079e4a6d02474b14b85521a0 c4bfd7d6729ee1cd3c29226bf896073ae01ecb088bbcc4e539adbf50524ada00]
	I1213 10:32:18.629386  424744 ssh_runner.go:195] Run: which crictl
	I1213 10:32:18.635122  424744 ssh_runner.go:195] Run: which crictl
	I1213 10:32:18.640766  424744 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:32:18.640850  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:32:18.685924  424744 cri.go:89] found id: ""
	I1213 10:32:18.685954  424744 logs.go:282] 0 containers: []
	W1213 10:32:18.685963  424744 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:32:18.685972  424744 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:32:18.686036  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:32:18.739419  424744 cri.go:89] found id: "908981b2654466c610edc2a4f838527f94a60873e10fc4a4b2a1cb7f7b7e8802"
	I1213 10:32:18.739448  424744 cri.go:89] found id: ""
	I1213 10:32:18.739460  424744 logs.go:282] 1 containers: [908981b2654466c610edc2a4f838527f94a60873e10fc4a4b2a1cb7f7b7e8802]
	I1213 10:32:18.739517  424744 ssh_runner.go:195] Run: which crictl
	I1213 10:32:18.744581  424744 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:32:18.744664  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:32:18.789811  424744 cri.go:89] found id: ""
	I1213 10:32:18.789834  424744 logs.go:282] 0 containers: []
	W1213 10:32:18.789843  424744 logs.go:284] No container was found matching "kindnet"
	I1213 10:32:18.789848  424744 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 10:32:18.789912  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 10:32:18.846835  424744 cri.go:89] found id: "de752d57f9e4e8e9f1bd0bfc21385294655f223beef697ab444f6770da9bc639"
	I1213 10:32:18.846860  424744 cri.go:89] found id: ""
	I1213 10:32:18.846870  424744 logs.go:282] 1 containers: [de752d57f9e4e8e9f1bd0bfc21385294655f223beef697ab444f6770da9bc639]
	I1213 10:32:18.846935  424744 ssh_runner.go:195] Run: which crictl
	I1213 10:32:18.851679  424744 logs.go:123] Gathering logs for etcd [0e87939746492795b2c9cffeaa0960f79cd26fef93ad2ee13bad7163090179f2] ...
	I1213 10:32:18.851731  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e87939746492795b2c9cffeaa0960f79cd26fef93ad2ee13bad7163090179f2"
	I1213 10:32:18.906346  424744 logs.go:123] Gathering logs for dmesg ...
	I1213 10:32:18.906385  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:32:18.926056  424744 logs.go:123] Gathering logs for kube-scheduler [d1fa391bd41e19817effdbc921fa9a56f3959cd3079e4a6d02474b14b85521a0] ...
	I1213 10:32:18.926099  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1fa391bd41e19817effdbc921fa9a56f3959cd3079e4a6d02474b14b85521a0"
	I1213 10:32:19.016259  424744 logs.go:123] Gathering logs for kube-scheduler [c4bfd7d6729ee1cd3c29226bf896073ae01ecb088bbcc4e539adbf50524ada00] ...
	I1213 10:32:19.016301  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4bfd7d6729ee1cd3c29226bf896073ae01ecb088bbcc4e539adbf50524ada00"
	I1213 10:32:19.060577  424744 logs.go:123] Gathering logs for kube-controller-manager [908981b2654466c610edc2a4f838527f94a60873e10fc4a4b2a1cb7f7b7e8802] ...
	I1213 10:32:19.060614  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 908981b2654466c610edc2a4f838527f94a60873e10fc4a4b2a1cb7f7b7e8802"
	I1213 10:32:19.107206  424744 logs.go:123] Gathering logs for storage-provisioner [de752d57f9e4e8e9f1bd0bfc21385294655f223beef697ab444f6770da9bc639] ...
	I1213 10:32:19.107258  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de752d57f9e4e8e9f1bd0bfc21385294655f223beef697ab444f6770da9bc639"
	I1213 10:32:19.151668  424744 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:32:19.151701  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:32:19.485778  424744 logs.go:123] Gathering logs for container status ...
	I1213 10:32:19.485811  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:32:19.538073  424744 logs.go:123] Gathering logs for kubelet ...
	I1213 10:32:19.538109  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:32:19.653417  424744 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:32:19.653454  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:32:19.748940  424744 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:32:19.748966  424744 logs.go:123] Gathering logs for kube-apiserver [89f23bbb20fa65bbf4a3baf0e2957049b356f26d2ad63e230f17e7373570ef2f] ...
	I1213 10:32:19.748990  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89f23bbb20fa65bbf4a3baf0e2957049b356f26d2ad63e230f17e7373570ef2f"
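The log-gathering pass that just completed follows a two-step pattern for every control-plane component: list matching container IDs with `crictl ps -a --quiet --name=<component>`, then tail each container with `crictl logs --tail 400 <id>`. A minimal local sketch of that loop follows; minikube actually runs these commands over SSH inside the VM via ssh_runner, so running them locally with sudo is an assumption of the sketch.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // gatherComponentLogs mirrors the two-step pattern above: list matching
    // container IDs, then tail each one. Running locally via sudo is an
    // assumption; the report's commands run inside the VM over SSH.
    func gatherComponentLogs(component string) error {
        ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--name="+component).Output()
        if err != nil {
            return err
        }
        for _, id := range strings.Fields(string(ids)) {
            out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
            fmt.Printf("==> %s [%s] (err=%v) <==\n%s\n", component, id, err, out)
        }
        return nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
            _ = gatherComponentLogs(c)
        }
    }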
	I1213 10:32:22.304927  424744 api_server.go:253] Checking apiserver healthz at https://192.168.72.174:8443/healthz ...
	I1213 10:32:22.305741  424744 api_server.go:269] stopped: https://192.168.72.174:8443/healthz: Get "https://192.168.72.174:8443/healthz": dial tcp 192.168.72.174:8443: connect: connection refused
	I1213 10:32:22.305821  424744 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:32:22.305892  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:32:22.352775  424744 cri.go:89] found id: "89f23bbb20fa65bbf4a3baf0e2957049b356f26d2ad63e230f17e7373570ef2f"
	I1213 10:32:22.352805  424744 cri.go:89] found id: ""
	I1213 10:32:22.352830  424744 logs.go:282] 1 containers: [89f23bbb20fa65bbf4a3baf0e2957049b356f26d2ad63e230f17e7373570ef2f]
	I1213 10:32:22.352892  424744 ssh_runner.go:195] Run: which crictl
	I1213 10:32:22.357936  424744 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:32:22.358026  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:32:22.404199  424744 cri.go:89] found id: "0e87939746492795b2c9cffeaa0960f79cd26fef93ad2ee13bad7163090179f2"
	I1213 10:32:22.404228  424744 cri.go:89] found id: ""
	I1213 10:32:22.404240  424744 logs.go:282] 1 containers: [0e87939746492795b2c9cffeaa0960f79cd26fef93ad2ee13bad7163090179f2]
	I1213 10:32:22.404315  424744 ssh_runner.go:195] Run: which crictl
	I1213 10:32:22.409467  424744 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:32:22.409542  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:32:22.457748  424744 cri.go:89] found id: ""
	I1213 10:32:22.457791  424744 logs.go:282] 0 containers: []
	W1213 10:32:22.457802  424744 logs.go:284] No container was found matching "coredns"
	I1213 10:32:22.457809  424744 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:32:22.457862  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:32:22.507060  424744 cri.go:89] found id: "d1fa391bd41e19817effdbc921fa9a56f3959cd3079e4a6d02474b14b85521a0"
	I1213 10:32:22.507089  424744 cri.go:89] found id: "c4bfd7d6729ee1cd3c29226bf896073ae01ecb088bbcc4e539adbf50524ada00"
	I1213 10:32:22.507096  424744 cri.go:89] found id: ""
	I1213 10:32:22.507105  424744 logs.go:282] 2 containers: [d1fa391bd41e19817effdbc921fa9a56f3959cd3079e4a6d02474b14b85521a0 c4bfd7d6729ee1cd3c29226bf896073ae01ecb088bbcc4e539adbf50524ada00]
	I1213 10:32:22.507170  424744 ssh_runner.go:195] Run: which crictl
	I1213 10:32:22.512886  424744 ssh_runner.go:195] Run: which crictl
	I1213 10:32:22.518889  424744 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:32:22.518964  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:32:22.559993  424744 cri.go:89] found id: ""
	I1213 10:32:22.560025  424744 logs.go:282] 0 containers: []
	W1213 10:32:22.560035  424744 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:32:22.560043  424744 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:32:22.560109  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:32:22.607965  424744 cri.go:89] found id: "908981b2654466c610edc2a4f838527f94a60873e10fc4a4b2a1cb7f7b7e8802"
	I1213 10:32:22.608002  424744 cri.go:89] found id: ""
	I1213 10:32:22.608014  424744 logs.go:282] 1 containers: [908981b2654466c610edc2a4f838527f94a60873e10fc4a4b2a1cb7f7b7e8802]
	I1213 10:32:22.608085  424744 ssh_runner.go:195] Run: which crictl
	I1213 10:32:22.613703  424744 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:32:22.613800  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:32:22.661062  424744 cri.go:89] found id: ""
	I1213 10:32:22.661088  424744 logs.go:282] 0 containers: []
	W1213 10:32:22.661099  424744 logs.go:284] No container was found matching "kindnet"
	I1213 10:32:22.661107  424744 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 10:32:22.661165  424744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 10:32:22.702785  424744 cri.go:89] found id: "de752d57f9e4e8e9f1bd0bfc21385294655f223beef697ab444f6770da9bc639"
	I1213 10:32:22.702813  424744 cri.go:89] found id: ""
	I1213 10:32:22.702824  424744 logs.go:282] 1 containers: [de752d57f9e4e8e9f1bd0bfc21385294655f223beef697ab444f6770da9bc639]
	I1213 10:32:22.702884  424744 ssh_runner.go:195] Run: which crictl
	I1213 10:32:22.708305  424744 logs.go:123] Gathering logs for kubelet ...
	I1213 10:32:22.708338  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:32:22.840230  424744 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:32:22.840273  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:32:22.933489  424744 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:32:22.933508  424744 logs.go:123] Gathering logs for kube-apiserver [89f23bbb20fa65bbf4a3baf0e2957049b356f26d2ad63e230f17e7373570ef2f] ...
	I1213 10:32:22.933520  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89f23bbb20fa65bbf4a3baf0e2957049b356f26d2ad63e230f17e7373570ef2f"
	I1213 10:32:22.979043  424744 logs.go:123] Gathering logs for kube-scheduler [d1fa391bd41e19817effdbc921fa9a56f3959cd3079e4a6d02474b14b85521a0] ...
	I1213 10:32:22.979095  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1fa391bd41e19817effdbc921fa9a56f3959cd3079e4a6d02474b14b85521a0"
	I1213 10:32:23.071308  424744 logs.go:123] Gathering logs for kube-scheduler [c4bfd7d6729ee1cd3c29226bf896073ae01ecb088bbcc4e539adbf50524ada00] ...
	I1213 10:32:23.071354  424744 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4bfd7d6729ee1cd3c29226bf896073ae01ecb088bbcc4e539adbf50524ada00"
	I1213 10:32:23.120450  428709 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 10:32:23.121628  428709 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:32:23.123969  428709 config.go:182] Loaded profile config "pause-617427": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:32:23.124124  428709 config.go:182] Loaded profile config "running-upgrade-689860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1213 10:32:23.124235  428709 config.go:182] Loaded profile config "stopped-upgrade-422744": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1213 10:32:23.124261  428709 start.go:1901] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1213 10:32:23.124380  428709 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:32:23.175696  428709 out.go:179] * Using the kvm2 driver based on user configuration
	I1213 10:32:23.180257  428709 start.go:309] selected driver: kvm2
	I1213 10:32:23.180269  428709 start.go:927] validating driver "kvm2" against <nil>
	I1213 10:32:23.180283  428709 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:32:23.180715  428709 start.go:1901] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1213 10:32:23.180801  428709 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:32:23.181588  428709 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1213 10:32:23.181816  428709 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 10:32:23.181841  428709 cni.go:84] Creating CNI manager for ""
	I1213 10:32:23.181913  428709 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 10:32:23.181919  428709 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 10:32:23.181932  428709 start.go:1901] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1213 10:32:23.181983  428709 start.go:353] cluster config:
	{Name:guest-964680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:guest-964680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:32:23.182160  428709 iso.go:125] acquiring lock: {Name:mk4ce8bfab58620efe86d1c7a68d79ed9c81b6ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:32:23.183869  428709 out.go:179] * Starting minikube without Kubernetes in cluster guest-964680
	I1213 10:32:23.185028  428709 cache.go:59] Skipping Kubernetes image caching due to --no-kubernetes flag
	I1213 10:32:23.185161  428709 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/guest-964680/config.json ...
	I1213 10:32:23.185190  428709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/guest-964680/config.json: {Name:mkc7e2fc2451cefed5fd1fccdb017488a48c00bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:32:23.185436  428709 start.go:360] acquireMachinesLock for guest-964680: {Name:mk911c6c71130df32abbe489ec2f7be251c727ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1213 10:32:23.185489  428709 start.go:364] duration metric: took 34.919µs to acquireMachinesLock for "guest-964680"
	I1213 10:32:23.185513  428709 start.go:93] Provisioning new machine with config: &{Name:guest-964680 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v0.0.0 ClusterName:guest-964680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 10:32:23.185590  428709 start.go:125] createHost starting for "" (driver="kvm2")
	I1213 10:32:18.429187  426415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:32:18.929824  426415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:32:19.007302  426415 api_server.go:72] duration metric: took 1.078294743s to wait for apiserver process to appear ...
	I1213 10:32:19.007345  426415 api_server.go:88] waiting for apiserver healthz status ...
	I1213 10:32:19.007369  426415 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I1213 10:32:19.007871  426415 api_server.go:269] stopped: https://192.168.39.209:8443/healthz: Get "https://192.168.39.209:8443/healthz": dial tcp 192.168.39.209:8443: connect: connection refused
	I1213 10:32:19.507508  426415 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I1213 10:32:22.102507  426415 api_server.go:279] https://192.168.39.209:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 10:32:22.102539  426415 api_server.go:103] status: https://192.168.39.209:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 10:32:22.102557  426415 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I1213 10:32:22.140138  426415 api_server.go:279] https://192.168.39.209:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 10:32:22.140173  426415 api_server.go:103] status: https://192.168.39.209:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 10:32:22.507422  426415 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I1213 10:32:22.512378  426415 api_server.go:279] https://192.168.39.209:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 10:32:22.512407  426415 api_server.go:103] status: https://192.168.39.209:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 10:32:23.007985  426415 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I1213 10:32:23.014241  426415 api_server.go:279] https://192.168.39.209:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 10:32:23.014273  426415 api_server.go:103] status: https://192.168.39.209:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 10:32:23.507732  426415 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I1213 10:32:23.513424  426415 api_server.go:279] https://192.168.39.209:8443/healthz returned 200:
	ok
	I1213 10:32:23.522642  426415 api_server.go:141] control plane version: v1.32.0
	I1213 10:32:23.522685  426415 api_server.go:131] duration metric: took 4.515331072s to wait for apiserver health ...
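The readiness wait above polls /healthz anonymously and retries until it gets 200: first a 403 while anonymous access to /healthz is still forbidden, then 500 while post-start hooks such as rbac/bootstrap-roles are incomplete, and finally 200 with "ok". A minimal poller in that spirit is sketched below; the endpoint address, the timeout, and the use of InsecureSkipVerify are assumptions for the sketch rather than what api_server.go actually does.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200,
    // treating every other status (403, 500, ...) as "not ready yet".
    // InsecureSkipVerify and the hard-coded URL below are sketch-only assumptions.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("not ready: %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.209:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }

Treating every non-200 response as retryable matches the behaviour visible in the log, where the same check is simply repeated every half second until the 200 arrives.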
	I1213 10:32:23.522698  426415 cni.go:84] Creating CNI manager for ""
	I1213 10:32:23.522707  426415 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 10:32:23.524595  426415 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 10:32:23.525927  426415 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 10:32:23.540990  426415 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
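The step above copies a 496-byte conflist into /etc/cni/net.d/1-k8s.conflist to configure the bridge CNI recommended for the kvm2 + crio combination. As a rough idea of what such a file can look like, the sketch below writes a generic bridge + host-local conflist; the JSON contents, the pod subnet, and the file mode are illustrative and may differ from the exact file minikube generates.

    package main

    import "os"

    // exampleConflist is a generic bridge CNI configuration for illustration only;
    // the actual 1-k8s.conflist that minikube writes may differ.
    const exampleConflist = `{
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }
    `

    func main() {
        // Writing requires root on the node; the path matches the log line above.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(exampleConflist), 0o644); err != nil {
            panic(err)
        }
    }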
	I1213 10:32:23.563293  426415 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 10:32:23.569766  426415 system_pods.go:59] 5 kube-system pods found
	I1213 10:32:23.569807  426415 system_pods.go:61] "etcd-stopped-upgrade-422744" [5347f8b9-af58-4eb0-9ba6-d006b2a834a5] Pending
	I1213 10:32:23.569821  426415 system_pods.go:61] "kube-apiserver-stopped-upgrade-422744" [0d4347d0-630f-4ae2-8757-4258791e76e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 10:32:23.569839  426415 system_pods.go:61] "kube-controller-manager-stopped-upgrade-422744" [d39ebab7-5573-4ac2-9e1a-0314760a2c37] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:32:23.569851  426415 system_pods.go:61] "kube-scheduler-stopped-upgrade-422744" [0691ee8f-695f-4053-b753-15d9d8fdfd01] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:32:23.569860  426415 system_pods.go:61] "storage-provisioner" [8a7610b2-0d06-4a78-a86e-132d708fc347] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1213 10:32:23.569873  426415 system_pods.go:74] duration metric: took 6.554628ms to wait for pod list to return data ...
	I1213 10:32:23.569883  426415 node_conditions.go:102] verifying NodePressure condition ...
	I1213 10:32:23.573289  426415 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 10:32:23.573341  426415 node_conditions.go:123] node cpu capacity is 2
	I1213 10:32:23.573360  426415 node_conditions.go:105] duration metric: took 3.470032ms to run NodePressure ...
	I1213 10:32:23.573426  426415 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:32:23.859515  426415 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 10:32:23.874811  426415 ops.go:34] apiserver oom_adj: -16
	I1213 10:32:23.874840  426415 kubeadm.go:602] duration metric: took 7.608534938s to restartPrimaryControlPlane
	I1213 10:32:23.874856  426415 kubeadm.go:403] duration metric: took 7.665172389s to StartCluster
	I1213 10:32:23.874881  426415 settings.go:142] acquiring lock: {Name:mk59569246b81cd6fde64cc849a423eeb59f3563 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:32:23.874993  426415 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-387918/kubeconfig
	I1213 10:32:23.876581  426415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/kubeconfig: {Name:mkc4c188214419e87992ca29ee1229c54fdde2b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:32:23.876939  426415 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.209 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 10:32:23.877057  426415 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:32:23.877159  426415 addons.go:70] Setting storage-provisioner=true in profile "stopped-upgrade-422744"
	I1213 10:32:23.877181  426415 config.go:182] Loaded profile config "stopped-upgrade-422744": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1213 10:32:23.877193  426415 addons.go:70] Setting default-storageclass=true in profile "stopped-upgrade-422744"
	I1213 10:32:23.877184  426415 addons.go:239] Setting addon storage-provisioner=true in "stopped-upgrade-422744"
	W1213 10:32:23.877234  426415 addons.go:248] addon storage-provisioner should already be in state true
	I1213 10:32:23.877234  426415 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-422744"
	I1213 10:32:23.877264  426415 host.go:66] Checking if "stopped-upgrade-422744" exists ...
	I1213 10:32:23.880566  426415 kapi.go:59] client config for stopped-upgrade-422744: &rest.Config{Host:"https://192.168.39.209:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-387918/.minikube/profiles/stopped-upgrade-422744/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-387918/.minikube/profiles/stopped-upgrade-422744/client.key", CAFile:"/home/jenkins/minikube-integration/22127-387918/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:32:23.880947  426415 addons.go:239] Setting addon default-storageclass=true in "stopped-upgrade-422744"
	W1213 10:32:23.880969  426415 addons.go:248] addon default-storageclass should already be in state true
	I1213 10:32:23.880994  426415 host.go:66] Checking if "stopped-upgrade-422744" exists ...
	I1213 10:32:23.882424  426415 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:32:23.882424  426415 out.go:179] * Creating mount /home/jenkins:/minikube-host ...
	I1213 10:32:23.882428  426415 out.go:179] * Verifying Kubernetes components...
	I1213 10:32:23.882680  426415 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:32:23.883120  426415 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:32:23.884234  426415 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:32:23.884257  426415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:32:23.884292  426415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:32:23.884645  426415 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/stopped-upgrade-422744/.mount-process: {Name:mk5821f554fd627915124dfcdf88ab4a4de7789b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:32:23.887240  426415 main.go:143] libmachine: domain stopped-upgrade-422744 has defined MAC address 52:54:00:45:f8:c8 in network mk-stopped-upgrade-422744
	I1213 10:32:23.887902  426415 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:f8:c8", ip: ""} in network mk-stopped-upgrade-422744: {Iface:virbr1 ExpiryTime:2025-12-13 11:32:06 +0000 UTC Type:0 Mac:52:54:00:45:f8:c8 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:stopped-upgrade-422744 Clientid:01:52:54:00:45:f8:c8}
	I1213 10:32:23.887939  426415 main.go:143] libmachine: domain stopped-upgrade-422744 has defined IP address 192.168.39.209 and MAC address 52:54:00:45:f8:c8 in network mk-stopped-upgrade-422744
	I1213 10:32:23.888421  426415 main.go:143] libmachine: domain stopped-upgrade-422744 has defined MAC address 52:54:00:45:f8:c8 in network mk-stopped-upgrade-422744
	I1213 10:32:23.888621  426415 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/stopped-upgrade-422744/id_rsa Username:docker}
	I1213 10:32:23.889064  426415 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:f8:c8", ip: ""} in network mk-stopped-upgrade-422744: {Iface:virbr1 ExpiryTime:2025-12-13 11:32:06 +0000 UTC Type:0 Mac:52:54:00:45:f8:c8 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:stopped-upgrade-422744 Clientid:01:52:54:00:45:f8:c8}
	I1213 10:32:23.889096  426415 main.go:143] libmachine: domain stopped-upgrade-422744 has defined IP address 192.168.39.209 and MAC address 52:54:00:45:f8:c8 in network mk-stopped-upgrade-422744
	I1213 10:32:23.889310  426415 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/stopped-upgrade-422744/id_rsa Username:docker}
	I1213 10:32:24.090387  426415 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:32:24.108577  426415 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:32:24.108675  426415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:32:24.140994  426415 api_server.go:72] duration metric: took 263.998872ms to wait for apiserver process to appear ...
	I1213 10:32:24.141035  426415 api_server.go:88] waiting for apiserver healthz status ...
	I1213 10:32:24.141082  426415 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I1213 10:32:24.146571  426415 api_server.go:279] https://192.168.39.209:8443/healthz returned 200:
	ok
	I1213 10:32:24.148529  426415 api_server.go:141] control plane version: v1.32.0
	I1213 10:32:24.148554  426415 api_server.go:131] duration metric: took 7.509988ms to wait for apiserver health ...
	I1213 10:32:24.148563  426415 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 10:32:24.153102  426415 system_pods.go:59] 5 kube-system pods found
	I1213 10:32:24.153131  426415 system_pods.go:61] "etcd-stopped-upgrade-422744" [5347f8b9-af58-4eb0-9ba6-d006b2a834a5] Pending
	I1213 10:32:24.153143  426415 system_pods.go:61] "kube-apiserver-stopped-upgrade-422744" [0d4347d0-630f-4ae2-8757-4258791e76e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 10:32:24.153154  426415 system_pods.go:61] "kube-controller-manager-stopped-upgrade-422744" [d39ebab7-5573-4ac2-9e1a-0314760a2c37] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 10:32:24.153164  426415 system_pods.go:61] "kube-scheduler-stopped-upgrade-422744" [0691ee8f-695f-4053-b753-15d9d8fdfd01] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 10:32:24.153172  426415 system_pods.go:61] "storage-provisioner" [8a7610b2-0d06-4a78-a86e-132d708fc347] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1213 10:32:24.153181  426415 system_pods.go:74] duration metric: took 4.611373ms to wait for pod list to return data ...
	I1213 10:32:24.153195  426415 kubeadm.go:587] duration metric: took 276.212899ms to wait for: map[apiserver:true system_pods:true]
	I1213 10:32:24.153210  426415 node_conditions.go:102] verifying NodePressure condition ...
	I1213 10:32:24.156255  426415 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1213 10:32:24.156274  426415 node_conditions.go:123] node cpu capacity is 2
	I1213 10:32:24.156289  426415 node_conditions.go:105] duration metric: took 3.073049ms to run NodePressure ...
	I1213 10:32:24.156305  426415 start.go:242] waiting for startup goroutines ...
	I1213 10:32:24.290908  426415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:32:24.303775  426415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:32:25.271372  426415 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1213 10:32:25.272642  426415 addons.go:530] duration metric: took 1.395601571s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1213 10:32:25.272697  426415 start.go:247] waiting for cluster config update ...
	I1213 10:32:25.272718  426415 start.go:256] writing updated cluster config ...
	I1213 10:32:25.273079  426415 ssh_runner.go:195] Run: rm -f paused
	I1213 10:32:25.333406  426415 start.go:625] kubectl: 1.34.3, cluster: 1.32.0 (minor skew: 2)
	I1213 10:32:25.335156  426415 out.go:203] 
	W1213 10:32:25.336569  426415 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.32.0.
	I1213 10:32:25.338173  426415 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1213 10:32:25.340045  426415 out.go:179] * Done! kubectl is now configured to use "stopped-upgrade-422744" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 13 10:32:25 pause-617427 crio[2618]: time="2025-12-13 10:32:25.651720017Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765621945651686914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ac8989c-8240-4315-b1c2-144004ab28ac name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 10:32:25 pause-617427 crio[2618]: time="2025-12-13 10:32:25.653007230Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f4bbc7be-cc34-4cc2-b337-254716496427 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 10:32:25 pause-617427 crio[2618]: time="2025-12-13 10:32:25.653123877Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f4bbc7be-cc34-4cc2-b337-254716496427 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 10:32:25 pause-617427 crio[2618]: time="2025-12-13 10:32:25.653386594Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5325cfe9f5561be21720a86c87eee2f1f6ebf65d86a04abb02e40044bcaabd05,PodSandboxId:414c5af7e883ce38fdc32bb1cba14f4770832010e89b12eec233b145e27cc3cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765621930081443023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gm4sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bfb5038-1b82-4e76-abab-e908bfd657b4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb36fde6e63cd3d8a62fafdbbe84281b8e6389b3fb9be6d7b7e7a14f9da5956d,PodSandboxId:57ab7851cba49c1010c06eaf26909557d01ed29f8b97563c79f252b8100bc7e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765621925301693068,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bd40aa68ed66b8319d92da33ea70edf,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7dc47f7676ca1fdf36818722a12bb5c3cac4bc0439eb25d3ceb36c44c82e8f3,PodSandboxId:c1c2196c5138a77d32f29b5389b2197c2e42954e2ee36fde2ac2296521025c74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b
6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765621925289145176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618d903e047425d639e2fa8d2b3adecd,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ade441284ab13ce98dc1c6d753626cf89c92bd91e7bb011c0ad6c98419219de8,PodSandboxId:a2b88aeef100702bc7762bea2672bcea8d0103b171404de1a7511c86412f3e4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e47
5e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765621925316293074,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec59d93ee055fb390175dec9fad66928,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebddbbc15d05a035f3ad7e39d494b583ab6573dd5634e6c4b14a2006e5f34906,PodSandboxId:345062c722672330c0f0b7eebc91843e0b5b81425d487da9d04954dabd2bc876,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765621925341301515,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243ed61afb6773c33c581096fc96bd12,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873ae37f7600787ee6b7515c1a8e317cc2a522b024e9ae2528b249130bcc4fdb,PodSandboxId:fc408c002b513
070a1afe3d8c88aaca34b4c19fe32bee1b4b9a50da21cb36f7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765621916508610360,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f2c4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607fcf8e-d8a3-471d-96f3-e1b24063d251,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639ae66155a42496ae026ae909a35634c613909c8f456458e05a2009d07d3c0b,PodSandboxId:414c5af7e883ce38fdc32bb1cba14f4770832010e89b12
eec233b145e27cc3cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765621917380298505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gm4sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bfb5038-1b82-4e76-abab-e908bfd657b4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baf35aa1d2b45bb5525d0dccfb256a0f277a28a168309f93d5f7aeb22cc81f6d,PodSandboxId:c1c2196c5138a77d32f29b5389b2197c2e42954e2ee36fde2ac2296521025c74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765621916371141105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618d903e047425d639e2fa8d2b3adecd,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.port
s: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d40453aecc37eb4c347838c490725ea24158d53b71297d2e29fe1da49bde772,PodSandboxId:57ab7851cba49c1010c06eaf26909557d01ed29f8b97563c79f252b8100bc7e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765621916286701993,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bd40aa68ed66b8319d92da33ea70e
df,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e603073854d97f3ddecca2d17efe16393662b7100e35615ec789b4fad68d34c0,PodSandboxId:345062c722672330c0f0b7eebc91843e0b5b81425d487da9d04954dabd2bc876,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765621916234207324,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617427,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 243ed61afb6773c33c581096fc96bd12,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b1341425c3d71feefd8ec6f79adfb3469bc8a278b52ea781100bd84163812b,PodSandboxId:a2b88aeef100702bc7762bea2672bcea8d0103b171404de1a7511c86412f3e4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765621916057102580,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec59d93ee055fb390175dec9fad66928,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b01c52d4399f83e18c4618beb6bfcddb5f8b44399baddfbd157ec084d0af2a,PodSandboxId:e3da12492784dd21d4ffef13a5d2397d2bc129cdf4619eff7eea76ebe0ba8f0c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7a
f1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765621850336831893,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f2c4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607fcf8e-d8a3-471d-96f3-e1b24063d251,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f4bbc7be-cc34-4cc2-b337-254716496427 name=/runtime.v1.RuntimeService/ListContainers
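The ListContainers request/response pairs above are the kubelet polling CRI-O over the CRI; with no filter set ("No filters were applied"), CRI-O returns every container, running and exited. The same list can be pulled on the node with 'crictl ps -a'. The sketch below shells out to crictl and prints each container's name, attempt and state; the JSON field names are an assumption based on the ListContainersResponse structure shown in this log, so treat it as illustrative rather than canonical.

// crictl_ps.go - sketch: list containers via crictl, mirroring the unfiltered
// ListContainers calls seen in the CRI-O debug log above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// containerList assumes crictl's JSON output uses lowerCamelCase names for the
// CRI ListContainersResponse fields (containers, id, metadata, state, labels).
type containerList struct {
	Containers []struct {
		ID       string `json:"id"`
		Metadata struct {
			Name    string `json:"name"`
			Attempt uint32 `json:"attempt"`
		} `json:"metadata"`
		State  string            `json:"state"` // e.g. CONTAINER_RUNNING, CONTAINER_EXITED
		Labels map[string]string `json:"labels"`
	} `json:"containers"`
}

func main() {
	// -a includes exited containers, matching the unfiltered request in the log.
	out, err := exec.Command("crictl", "ps", "-a", "-o", "json").Output()
	if err != nil {
		log.Fatalf("crictl ps: %v", err)
	}
	var list containerList
	if err := json.Unmarshal(out, &list); err != nil {
		log.Fatalf("parse: %v", err)
	}
	for _, c := range list.Containers {
		fmt.Printf("%-13.13s %-24s attempt=%d  %s\n",
			c.ID, c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}

On the pause-617427 node this would show the attempt-2 control-plane containers as CONTAINER_RUNNING and the earlier attempt-1/attempt-0 containers as CONTAINER_EXITED, as in the response payload above.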
	Dec 13 10:32:25 pause-617427 crio[2618]: time="2025-12-13 10:32:25.705279716Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4f0c2fe3-69b5-4306-aba7-eb5803963111 name=/runtime.v1.RuntimeService/Version
	Dec 13 10:32:25 pause-617427 crio[2618]: time="2025-12-13 10:32:25.705403540Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f0c2fe3-69b5-4306-aba7-eb5803963111 name=/runtime.v1.RuntimeService/Version
	Dec 13 10:32:25 pause-617427 crio[2618]: time="2025-12-13 10:32:25.706712257Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4a110a99-806b-400d-86fb-00ac74467d3b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 10:32:25 pause-617427 crio[2618]: time="2025-12-13 10:32:25.707539933Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765621945707510832,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4a110a99-806b-400d-86fb-00ac74467d3b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 10:32:25 pause-617427 crio[2618]: time="2025-12-13 10:32:25.708691225Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e5d98df-c9f8-4a31-9767-137022f002e8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 10:32:25 pause-617427 crio[2618]: time="2025-12-13 10:32:25.709008889Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e5d98df-c9f8-4a31-9767-137022f002e8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 10:32:25 pause-617427 crio[2618]: time="2025-12-13 10:32:25.709328168Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5325cfe9f5561be21720a86c87eee2f1f6ebf65d86a04abb02e40044bcaabd05,PodSandboxId:414c5af7e883ce38fdc32bb1cba14f4770832010e89b12eec233b145e27cc3cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765621930081443023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gm4sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bfb5038-1b82-4e76-abab-e908bfd657b4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb36fde6e63cd3d8a62fafdbbe84281b8e6389b3fb9be6d7b7e7a14f9da5956d,PodSandboxId:57ab7851cba49c1010c06eaf26909557d01ed29f8b97563c79f252b8100bc7e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765621925301693068,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bd40aa68ed66b8319d92da33ea70edf,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7dc47f7676ca1fdf36818722a12bb5c3cac4bc0439eb25d3ceb36c44c82e8f3,PodSandboxId:c1c2196c5138a77d32f29b5389b2197c2e42954e2ee36fde2ac2296521025c74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b
6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765621925289145176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618d903e047425d639e2fa8d2b3adecd,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ade441284ab13ce98dc1c6d753626cf89c92bd91e7bb011c0ad6c98419219de8,PodSandboxId:a2b88aeef100702bc7762bea2672bcea8d0103b171404de1a7511c86412f3e4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e47
5e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765621925316293074,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec59d93ee055fb390175dec9fad66928,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebddbbc15d05a035f3ad7e39d494b583ab6573dd5634e6c4b14a2006e5f34906,PodSandboxId:345062c722672330c0f0b7eebc91843e0b5b81425d487da9d04954dabd2bc876,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765621925341301515,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243ed61afb6773c33c581096fc96bd12,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873ae37f7600787ee6b7515c1a8e317cc2a522b024e9ae2528b249130bcc4fdb,PodSandboxId:fc408c002b513
070a1afe3d8c88aaca34b4c19fe32bee1b4b9a50da21cb36f7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765621916508610360,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f2c4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607fcf8e-d8a3-471d-96f3-e1b24063d251,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639ae66155a42496ae026ae909a35634c613909c8f456458e05a2009d07d3c0b,PodSandboxId:414c5af7e883ce38fdc32bb1cba14f4770832010e89b12
eec233b145e27cc3cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765621917380298505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gm4sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bfb5038-1b82-4e76-abab-e908bfd657b4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baf35aa1d2b45bb5525d0dccfb256a0f277a28a168309f93d5f7aeb22cc81f6d,PodSandboxId:c1c2196c5138a77d32f29b5389b2197c2e42954e2ee36fde2ac2296521025c74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765621916371141105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618d903e047425d639e2fa8d2b3adecd,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.port
s: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d40453aecc37eb4c347838c490725ea24158d53b71297d2e29fe1da49bde772,PodSandboxId:57ab7851cba49c1010c06eaf26909557d01ed29f8b97563c79f252b8100bc7e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765621916286701993,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bd40aa68ed66b8319d92da33ea70e
df,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e603073854d97f3ddecca2d17efe16393662b7100e35615ec789b4fad68d34c0,PodSandboxId:345062c722672330c0f0b7eebc91843e0b5b81425d487da9d04954dabd2bc876,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765621916234207324,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617427,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 243ed61afb6773c33c581096fc96bd12,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b1341425c3d71feefd8ec6f79adfb3469bc8a278b52ea781100bd84163812b,PodSandboxId:a2b88aeef100702bc7762bea2672bcea8d0103b171404de1a7511c86412f3e4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765621916057102580,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec59d93ee055fb390175dec9fad66928,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b01c52d4399f83e18c4618beb6bfcddb5f8b44399baddfbd157ec084d0af2a,PodSandboxId:e3da12492784dd21d4ffef13a5d2397d2bc129cdf4619eff7eea76ebe0ba8f0c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7a
f1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765621850336831893,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f2c4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607fcf8e-d8a3-471d-96f3-e1b24063d251,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e5d98df-c9f8-4a31-9767-137022f002e8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 10:32:25 pause-617427 crio[2618]: time="2025-12-13 10:32:25.754755696Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2474c71c-4c4f-43e2-9ae9-26f5de60b2a9 name=/runtime.v1.RuntimeService/Version
	Dec 13 10:32:25 pause-617427 crio[2618]: time="2025-12-13 10:32:25.755242680Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2474c71c-4c4f-43e2-9ae9-26f5de60b2a9 name=/runtime.v1.RuntimeService/Version
	Dec 13 10:32:25 pause-617427 crio[2618]: time="2025-12-13 10:32:25.756556969Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3baf3a5c-a997-4e44-b6a1-2618fa6bcdbf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 10:32:25 pause-617427 crio[2618]: time="2025-12-13 10:32:25.757100258Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765621945757016784,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3baf3a5c-a997-4e44-b6a1-2618fa6bcdbf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 10:32:25 pause-617427 crio[2618]: time="2025-12-13 10:32:25.758164024Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e65dafd8-fd9b-4161-95ff-480a3ce3a1b1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 10:32:25 pause-617427 crio[2618]: time="2025-12-13 10:32:25.758237107Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e65dafd8-fd9b-4161-95ff-480a3ce3a1b1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 10:32:25 pause-617427 crio[2618]: time="2025-12-13 10:32:25.758532652Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5325cfe9f5561be21720a86c87eee2f1f6ebf65d86a04abb02e40044bcaabd05,PodSandboxId:414c5af7e883ce38fdc32bb1cba14f4770832010e89b12eec233b145e27cc3cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765621930081443023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gm4sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bfb5038-1b82-4e76-abab-e908bfd657b4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb36fde6e63cd3d8a62fafdbbe84281b8e6389b3fb9be6d7b7e7a14f9da5956d,PodSandboxId:57ab7851cba49c1010c06eaf26909557d01ed29f8b97563c79f252b8100bc7e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765621925301693068,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bd40aa68ed66b8319d92da33ea70edf,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7dc47f7676ca1fdf36818722a12bb5c3cac4bc0439eb25d3ceb36c44c82e8f3,PodSandboxId:c1c2196c5138a77d32f29b5389b2197c2e42954e2ee36fde2ac2296521025c74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b
6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765621925289145176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618d903e047425d639e2fa8d2b3adecd,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ade441284ab13ce98dc1c6d753626cf89c92bd91e7bb011c0ad6c98419219de8,PodSandboxId:a2b88aeef100702bc7762bea2672bcea8d0103b171404de1a7511c86412f3e4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e47
5e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765621925316293074,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec59d93ee055fb390175dec9fad66928,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebddbbc15d05a035f3ad7e39d494b583ab6573dd5634e6c4b14a2006e5f34906,PodSandboxId:345062c722672330c0f0b7eebc91843e0b5b81425d487da9d04954dabd2bc876,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765621925341301515,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243ed61afb6773c33c581096fc96bd12,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873ae37f7600787ee6b7515c1a8e317cc2a522b024e9ae2528b249130bcc4fdb,PodSandboxId:fc408c002b513
070a1afe3d8c88aaca34b4c19fe32bee1b4b9a50da21cb36f7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765621916508610360,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f2c4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607fcf8e-d8a3-471d-96f3-e1b24063d251,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639ae66155a42496ae026ae909a35634c613909c8f456458e05a2009d07d3c0b,PodSandboxId:414c5af7e883ce38fdc32bb1cba14f4770832010e89b12
eec233b145e27cc3cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765621917380298505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gm4sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bfb5038-1b82-4e76-abab-e908bfd657b4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baf35aa1d2b45bb5525d0dccfb256a0f277a28a168309f93d5f7aeb22cc81f6d,PodSandboxId:c1c2196c5138a77d32f29b5389b2197c2e42954e2ee36fde2ac2296521025c74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765621916371141105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618d903e047425d639e2fa8d2b3adecd,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.port
s: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d40453aecc37eb4c347838c490725ea24158d53b71297d2e29fe1da49bde772,PodSandboxId:57ab7851cba49c1010c06eaf26909557d01ed29f8b97563c79f252b8100bc7e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765621916286701993,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bd40aa68ed66b8319d92da33ea70e
df,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e603073854d97f3ddecca2d17efe16393662b7100e35615ec789b4fad68d34c0,PodSandboxId:345062c722672330c0f0b7eebc91843e0b5b81425d487da9d04954dabd2bc876,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765621916234207324,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617427,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 243ed61afb6773c33c581096fc96bd12,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b1341425c3d71feefd8ec6f79adfb3469bc8a278b52ea781100bd84163812b,PodSandboxId:a2b88aeef100702bc7762bea2672bcea8d0103b171404de1a7511c86412f3e4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765621916057102580,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec59d93ee055fb390175dec9fad66928,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b01c52d4399f83e18c4618beb6bfcddb5f8b44399baddfbd157ec084d0af2a,PodSandboxId:e3da12492784dd21d4ffef13a5d2397d2bc129cdf4619eff7eea76ebe0ba8f0c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7a
f1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765621850336831893,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f2c4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607fcf8e-d8a3-471d-96f3-e1b24063d251,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e65dafd8-fd9b-4161-95ff-480a3ce3a1b1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 10:32:25 pause-617427 crio[2618]: time="2025-12-13 10:32:25.811156497Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=17e1bca0-cd87-4b80-8201-cbb31b32e667 name=/runtime.v1.RuntimeService/Version
	Dec 13 10:32:25 pause-617427 crio[2618]: time="2025-12-13 10:32:25.811417306Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=17e1bca0-cd87-4b80-8201-cbb31b32e667 name=/runtime.v1.RuntimeService/Version
	Dec 13 10:32:25 pause-617427 crio[2618]: time="2025-12-13 10:32:25.813617496Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=01d7ddb8-1461-436e-8337-d9f83e28325e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 10:32:25 pause-617427 crio[2618]: time="2025-12-13 10:32:25.813965849Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765621945813943212,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=01d7ddb8-1461-436e-8337-d9f83e28325e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 13 10:32:25 pause-617427 crio[2618]: time="2025-12-13 10:32:25.815109561Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d826cb5-7d89-4d65-a6a7-8b31c9ec7934 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 10:32:25 pause-617427 crio[2618]: time="2025-12-13 10:32:25.815199895Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9d826cb5-7d89-4d65-a6a7-8b31c9ec7934 name=/runtime.v1.RuntimeService/ListContainers
	Dec 13 10:32:25 pause-617427 crio[2618]: time="2025-12-13 10:32:25.815440716Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5325cfe9f5561be21720a86c87eee2f1f6ebf65d86a04abb02e40044bcaabd05,PodSandboxId:414c5af7e883ce38fdc32bb1cba14f4770832010e89b12eec233b145e27cc3cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765621930081443023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gm4sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bfb5038-1b82-4e76-abab-e908bfd657b4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb36fde6e63cd3d8a62fafdbbe84281b8e6389b3fb9be6d7b7e7a14f9da5956d,PodSandboxId:57ab7851cba49c1010c06eaf26909557d01ed29f8b97563c79f252b8100bc7e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765621925301693068,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bd40aa68ed66b8319d92da33ea70edf,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7dc47f7676ca1fdf36818722a12bb5c3cac4bc0439eb25d3ceb36c44c82e8f3,PodSandboxId:c1c2196c5138a77d32f29b5389b2197c2e42954e2ee36fde2ac2296521025c74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b
6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765621925289145176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618d903e047425d639e2fa8d2b3adecd,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ade441284ab13ce98dc1c6d753626cf89c92bd91e7bb011c0ad6c98419219de8,PodSandboxId:a2b88aeef100702bc7762bea2672bcea8d0103b171404de1a7511c86412f3e4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e47
5e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765621925316293074,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec59d93ee055fb390175dec9fad66928,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebddbbc15d05a035f3ad7e39d494b583ab6573dd5634e6c4b14a2006e5f34906,PodSandboxId:345062c722672330c0f0b7eebc91843e0b5b81425d487da9d04954dabd2bc876,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765621925341301515,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243ed61afb6773c33c581096fc96bd12,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873ae37f7600787ee6b7515c1a8e317cc2a522b024e9ae2528b249130bcc4fdb,PodSandboxId:fc408c002b513
070a1afe3d8c88aaca34b4c19fe32bee1b4b9a50da21cb36f7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765621916508610360,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f2c4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607fcf8e-d8a3-471d-96f3-e1b24063d251,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:639ae66155a42496ae026ae909a35634c613909c8f456458e05a2009d07d3c0b,PodSandboxId:414c5af7e883ce38fdc32bb1cba14f4770832010e89b12
eec233b145e27cc3cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765621917380298505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gm4sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bfb5038-1b82-4e76-abab-e908bfd657b4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baf35aa1d2b45bb5525d0dccfb256a0f277a28a168309f93d5f7aeb22cc81f6d,PodSandboxId:c1c2196c5138a77d32f29b5389b2197c2e42954e2ee36fde2ac2296521025c74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765621916371141105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618d903e047425d639e2fa8d2b3adecd,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.port
s: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d40453aecc37eb4c347838c490725ea24158d53b71297d2e29fe1da49bde772,PodSandboxId:57ab7851cba49c1010c06eaf26909557d01ed29f8b97563c79f252b8100bc7e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765621916286701993,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bd40aa68ed66b8319d92da33ea70e
df,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e603073854d97f3ddecca2d17efe16393662b7100e35615ec789b4fad68d34c0,PodSandboxId:345062c722672330c0f0b7eebc91843e0b5b81425d487da9d04954dabd2bc876,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765621916234207324,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-617427,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 243ed61afb6773c33c581096fc96bd12,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b1341425c3d71feefd8ec6f79adfb3469bc8a278b52ea781100bd84163812b,PodSandboxId:a2b88aeef100702bc7762bea2672bcea8d0103b171404de1a7511c86412f3e4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765621916057102580,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-617427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec59d93ee055fb390175dec9fad66928,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b01c52d4399f83e18c4618beb6bfcddb5f8b44399baddfbd157ec084d0af2a,PodSandboxId:e3da12492784dd21d4ffef13a5d2397d2bc129cdf4619eff7eea76ebe0ba8f0c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7a
f1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765621850336831893,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f2c4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607fcf8e-d8a3-471d-96f3-e1b24063d251,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9d826cb5-7d89-4d65-a6a7-8b31c9ec7934 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	5325cfe9f5561       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   15 seconds ago       Running             coredns                   2                   414c5af7e883c       coredns-66bc5c9577-gm4sm               kube-system
	ebddbbc15d05a       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   20 seconds ago       Running             etcd                      2                   345062c722672       etcd-pause-617427                      kube-system
	ade441284ab13       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   20 seconds ago       Running             kube-controller-manager   2                   a2b88aeef1007       kube-controller-manager-pause-617427   kube-system
	bb36fde6e63cd       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   20 seconds ago       Running             kube-scheduler            2                   57ab7851cba49       kube-scheduler-pause-617427            kube-system
	d7dc47f7676ca       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   20 seconds ago       Running             kube-apiserver            2                   c1c2196c5138a       kube-apiserver-pause-617427            kube-system
	639ae66155a42       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   28 seconds ago       Exited              coredns                   1                   414c5af7e883c       coredns-66bc5c9577-gm4sm               kube-system
	873ae37f76007       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   29 seconds ago       Running             kube-proxy                1                   fc408c002b513       kube-proxy-f2c4f                       kube-system
	baf35aa1d2b45       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   29 seconds ago       Exited              kube-apiserver            1                   c1c2196c5138a       kube-apiserver-pause-617427            kube-system
	2d40453aecc37       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   29 seconds ago       Exited              kube-scheduler            1                   57ab7851cba49       kube-scheduler-pause-617427            kube-system
	e603073854d97       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   29 seconds ago       Exited              etcd                      1                   345062c722672       etcd-pause-617427                      kube-system
	a6b1341425c3d       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   29 seconds ago       Exited              kube-controller-manager   1                   a2b88aeef1007       kube-controller-manager-pause-617427   kube-system
	d7b01c52d4399       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   About a minute ago   Exited              kube-proxy                0                   e3da12492784d       kube-proxy-f2c4f                       kube-system
	
	
	==> coredns [5325cfe9f5561be21720a86c87eee2f1f6ebf65d86a04abb02e40044bcaabd05] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50077 - 57561 "HINFO IN 1273366703144615730.5193948991097563550. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.076348536s
	
	
	==> coredns [639ae66155a42496ae026ae909a35634c613909c8f456458e05a2009d07d3c0b] <==
	
	
	==> describe nodes <==
	Name:               pause-617427
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-617427
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b
	                    minikube.k8s.io/name=pause-617427
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T10_30_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 10:30:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-617427
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 10:32:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 10:32:09 +0000   Sat, 13 Dec 2025 10:30:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 10:32:09 +0000   Sat, 13 Dec 2025 10:30:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 10:32:09 +0000   Sat, 13 Dec 2025 10:30:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 10:32:09 +0000   Sat, 13 Dec 2025 10:30:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.105
	  Hostname:    pause-617427
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 d4c40a8d16ee4077959d0f23e318def8
	  System UUID:                d4c40a8d-16ee-4077-959d-0f23e318def8
	  Boot ID:                    aaf509ab-80b3-4c29-9ef3-70c306fba65f
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-gm4sm                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     96s
	  kube-system                 etcd-pause-617427                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         101s
	  kube-system                 kube-apiserver-pause-617427             250m (12%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-controller-manager-pause-617427    200m (10%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-proxy-f2c4f                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-scheduler-pause-617427             100m (5%)     0 (0%)      0 (0%)           0 (0%)         102s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 95s                kube-proxy       
	  Normal  Starting                 16s                kube-proxy       
	  Normal  NodeHasSufficientPID     101s               kubelet          Node pause-617427 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  101s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  101s               kubelet          Node pause-617427 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    101s               kubelet          Node pause-617427 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 101s               kubelet          Starting kubelet.
	  Normal  NodeReady                100s               kubelet          Node pause-617427 status is now: NodeReady
	  Normal  RegisteredNode           97s                node-controller  Node pause-617427 event: Registered Node pause-617427 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node pause-617427 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node pause-617427 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node pause-617427 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14s                node-controller  Node pause-617427 event: Registered Node pause-617427 in Controller
	
	
	==> dmesg <==
	[Dec13 10:30] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001745] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002813] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.175185] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.085958] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.100906] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.143604] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.507921] kauditd_printk_skb: 18 callbacks suppressed
	[Dec13 10:31] kauditd_printk_skb: 190 callbacks suppressed
	[  +2.993638] kauditd_printk_skb: 319 callbacks suppressed
	[Dec13 10:32] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [e603073854d97f3ddecca2d17efe16393662b7100e35615ec789b4fad68d34c0] <==
	{"level":"warn","ts":"2025-12-13T10:31:59.525548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:31:59.545824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:31:59.575249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:31:59.579289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:31:59.601220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:31:59.615807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:31:59.722505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50682","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T10:32:01.829219Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-13T10:32:01.829301Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-617427","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.105:2380"],"advertise-client-urls":["https://192.168.50.105:2379"]}
	{"level":"error","ts":"2025-12-13T10:32:01.829430Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T10:32:01.831611Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T10:32:01.831685Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T10:32:01.831740Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d113b8292a777974","current-leader-member-id":"d113b8292a777974"}
	{"level":"info","ts":"2025-12-13T10:32:01.831835Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-13T10:32:01.831899Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-13T10:32:01.831917Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T10:32:01.831975Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T10:32:01.831984Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-13T10:32:01.832074Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.105:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T10:32:01.832114Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.105:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T10:32:01.832124Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.105:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T10:32:01.836146Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.105:2380"}
	{"level":"error","ts":"2025-12-13T10:32:01.836283Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.105:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T10:32:01.836349Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.105:2380"}
	{"level":"info","ts":"2025-12-13T10:32:01.836391Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-617427","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.105:2380"],"advertise-client-urls":["https://192.168.50.105:2379"]}
	
	
	==> etcd [ebddbbc15d05a035f3ad7e39d494b583ab6573dd5634e6c4b14a2006e5f34906] <==
	{"level":"warn","ts":"2025-12-13T10:32:07.752921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:07.780521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:07.807873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:07.832667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:07.849461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:07.863397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:07.878299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:07.900958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:07.909655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:07.926340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:07.947829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:07.967814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:08.000518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:08.009564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:08.023212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:08.056844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:08.063893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:08.086129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:08.106880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:08.123577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:08.130513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:08.142309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:32:08.232985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36814","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T10:32:17.425843Z","caller":"traceutil/trace.go:172","msg":"trace[42633227] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"324.988996ms","start":"2025-12-13T10:32:17.100834Z","end":"2025-12-13T10:32:17.425823Z","steps":["trace[42633227] 'process raft request'  (duration: 324.90086ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T10:32:17.426703Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-13T10:32:17.100751Z","time spent":"325.18136ms","remote":"127.0.0.1:36016","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5031,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-617427\" mod_revision:412 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-617427\" value_size:4969 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-617427\" > >"}
	
	
	==> kernel <==
	 10:32:26 up 2 min,  0 users,  load average: 1.14, 0.40, 0.15
	Linux pause-617427 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 11 23:11:39 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [baf35aa1d2b45bb5525d0dccfb256a0f277a28a168309f93d5f7aeb22cc81f6d] <==
	F1213 10:32:00.495379       1 hooks.go:204] PostStartHook "priority-and-fairness-config-producer" failed: APF bootstrap ensurer timed out waiting for cache sync
	E1213 10:32:00.581689       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="ipallocator-repair-controller"
	I1213 10:32:00.581815       1 repairip.go:214] Shutting down ipallocator-repair-controller
	E1213 10:32:00.583332       1 customresource_discovery_controller.go:297] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	I1213 10:32:00.583396       1 customresource_discovery_controller.go:298] Shutting down DiscoveryController
	I1213 10:32:00.583451       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I1213 10:32:00.583553       1 controller.go:84] Shutting down OpenAPI AggregationController
	I1213 10:32:00.583625       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	E1213 10:32:00.583997       1 system_namespaces_controller.go:69] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	E1213 10:32:00.584114       1 cache.go:35] "Unhandled Error" err="Unable to sync caches for APIServiceRegistrationController controller" logger="UnhandledError"
	I1213 10:32:00.584155       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E1213 10:32:00.584200       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="kubernetes-service-cidr-controller"
	I1213 10:32:00.584264       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1213 10:32:00.584308       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1213 10:32:00.584356       1 controller.go:89] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	E1213 10:32:00.584379       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="configmaps"
	I1213 10:32:00.584397       1 system_namespaces_controller.go:70] Shutting down system namespaces controller
	I1213 10:32:00.584422       1 object_count_tracker.go:141] "StorageObjectCountTracker pruner is exiting"
	E1213 10:32:00.584455       1 cache.go:35] "Unhandled Error" err="Unable to sync caches for RemoteAvailability controller" logger="UnhandledError"
	E1213 10:32:00.584473       1 cache.go:35] "Unhandled Error" err="Unable to sync caches for LocalAvailability controller" logger="UnhandledError"
	F1213 10:32:00.584492       1 hooks.go:204] PostStartHook "crd-informer-synced" failed: context canceled
	E1213 10:32:00.654142       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="cluster_authentication_trust_controller"
	I1213 10:32:00.654238       1 cluster_authentication_trust_controller.go:467] Shutting down cluster_authentication_trust_controller controller
	E1213 10:32:00.654271       1 gc_controller.go:84] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	E1213 10:32:00.654348       1 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://localhost:8443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-s7wjmkxifsc3xv5rprgdltj6yq\": time to stop HTTP server" interval="200ms"
	
	
	==> kube-apiserver [d7dc47f7676ca1fdf36818722a12bb5c3cac4bc0439eb25d3ceb36c44c82e8f3] <==
	I1213 10:32:09.136173       1 policy_source.go:240] refreshing policies
	I1213 10:32:09.158481       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 10:32:09.159445       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1213 10:32:09.161198       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1213 10:32:09.161414       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1213 10:32:09.161782       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1213 10:32:09.163748       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1213 10:32:09.172272       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1213 10:32:09.172309       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 10:32:09.174552       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 10:32:09.180342       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 10:32:09.181133       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 10:32:09.181460       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 10:32:09.186345       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 10:32:09.227784       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1213 10:32:09.233290       1 cache.go:39] Caches are synced for autoregister controller
	I1213 10:32:09.780245       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 10:32:09.967094       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 10:32:11.140638       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 10:32:11.275088       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 10:32:11.349198       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 10:32:11.368023       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 10:32:12.583631       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 10:32:12.781677       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 10:32:12.830647       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [a6b1341425c3d71feefd8ec6f79adfb3469bc8a278b52ea781100bd84163812b] <==
	I1213 10:31:57.721003       1 serving.go:386] Generated self-signed cert in-memory
	I1213 10:31:58.448215       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1213 10:31:58.448308       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 10:31:58.450818       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1213 10:31:58.450950       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1213 10:31:58.451260       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1213 10:31:58.451353       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [ade441284ab13ce98dc1c6d753626cf89c92bd91e7bb011c0ad6c98419219de8] <==
	I1213 10:32:12.493460       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 10:32:12.496599       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1213 10:32:12.497563       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1213 10:32:12.498191       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1213 10:32:12.508951       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 10:32:12.515018       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 10:32:12.521651       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 10:32:12.522940       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 10:32:12.525675       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 10:32:12.525818       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 10:32:12.525844       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 10:32:12.525797       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1213 10:32:12.526460       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1213 10:32:12.528217       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1213 10:32:12.528343       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-617427"
	I1213 10:32:12.528416       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 10:32:12.527681       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1213 10:32:12.528982       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1213 10:32:12.528928       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1213 10:32:12.533283       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1213 10:32:12.538550       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1213 10:32:12.542000       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 10:32:12.545209       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 10:32:12.567814       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 10:32:12.574001       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	
	
	==> kube-proxy [873ae37f7600787ee6b7515c1a8e317cc2a522b024e9ae2528b249130bcc4fdb] <==
	E1213 10:32:04.667337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-617427&limit=500&resourceVersion=0\": dial tcp 192.168.50.105:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1213 10:32:09.290183       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 10:32:09.290408       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.105"]
	E1213 10:32:09.290607       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 10:32:09.331627       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 10:32:09.331758       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 10:32:09.331807       1 server_linux.go:132] "Using iptables Proxier"
	I1213 10:32:09.346743       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 10:32:09.347248       1 server.go:527] "Version info" version="v1.34.2"
	I1213 10:32:09.347488       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 10:32:09.351927       1 config.go:200] "Starting service config controller"
	I1213 10:32:09.351987       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 10:32:09.352002       1 config.go:106] "Starting endpoint slice config controller"
	I1213 10:32:09.352006       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 10:32:09.352015       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 10:32:09.352018       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 10:32:09.352516       1 config.go:309] "Starting node config controller"
	I1213 10:32:09.352548       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 10:32:09.352554       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 10:32:09.452313       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 10:32:09.452319       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 10:32:09.452332       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [d7b01c52d4399f83e18c4618beb6bfcddb5f8b44399baddfbd157ec084d0af2a] <==
	I1213 10:30:50.711332       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 10:30:50.811900       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 10:30:50.811943       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.105"]
	E1213 10:30:50.812011       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 10:30:50.869919       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1213 10:30:50.869999       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1213 10:30:50.870084       1 server_linux.go:132] "Using iptables Proxier"
	I1213 10:30:50.882668       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 10:30:50.883493       1 server.go:527] "Version info" version="v1.34.2"
	I1213 10:30:50.883508       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 10:30:50.891788       1 config.go:200] "Starting service config controller"
	I1213 10:30:50.891819       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 10:30:50.891835       1 config.go:106] "Starting endpoint slice config controller"
	I1213 10:30:50.891838       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 10:30:50.891847       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 10:30:50.891850       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 10:30:50.894211       1 config.go:309] "Starting node config controller"
	I1213 10:30:50.895374       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 10:30:50.895606       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 10:30:50.993300       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 10:30:50.993442       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 10:30:50.993552       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2d40453aecc37eb4c347838c490725ea24158d53b71297d2e29fe1da49bde772] <==
	I1213 10:31:58.900615       1 serving.go:386] Generated self-signed cert in-memory
	W1213 10:32:01.675280       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.50.105:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.50.105:8443: connect: connection refused - error from a previous attempt: unexpected EOF
	W1213 10:32:01.675349       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 10:32:01.675362       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 10:32:01.692579       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1213 10:32:01.692608       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1213 10:32:01.692639       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I1213 10:32:01.695684       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 10:32:01.695742       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 10:32:01.696159       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1213 10:32:01.696252       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	E1213 10:32:01.696398       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 10:32:01.696413       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 10:32:01.696435       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 10:32:01.696445       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1213 10:32:01.696521       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1213 10:32:01.696578       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1213 10:32:01.696585       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1213 10:32:01.696604       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bb36fde6e63cd3d8a62fafdbbe84281b8e6389b3fb9be6d7b7e7a14f9da5956d] <==
	I1213 10:32:08.148672       1 serving.go:386] Generated self-signed cert in-memory
	I1213 10:32:09.223744       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1213 10:32:09.223781       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 10:32:09.231634       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1213 10:32:09.231739       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1213 10:32:09.232145       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 10:32:09.232331       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 10:32:09.232706       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 10:32:09.232827       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 10:32:09.232914       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 10:32:09.232925       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 10:32:09.333357       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 10:32:09.333526       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1213 10:32:09.333698       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 10:32:07 pause-617427 kubelet[3660]: E1213 10:32:07.988857    3660 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-617427\" not found" node="pause-617427"
	Dec 13 10:32:08 pause-617427 kubelet[3660]: E1213 10:32:08.989587    3660 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-617427\" not found" node="pause-617427"
	Dec 13 10:32:08 pause-617427 kubelet[3660]: E1213 10:32:08.991427    3660 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-617427\" not found" node="pause-617427"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: I1213 10:32:09.169199    3660 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-617427"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: E1213 10:32:09.210079    3660 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-617427\" already exists" pod="kube-system/kube-controller-manager-pause-617427"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: I1213 10:32:09.210119    3660 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-617427"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: E1213 10:32:09.221487    3660 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-617427\" already exists" pod="kube-system/kube-scheduler-pause-617427"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: I1213 10:32:09.221514    3660 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-617427"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: E1213 10:32:09.234759    3660 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-pause-617427\" already exists" pod="kube-system/etcd-pause-617427"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: I1213 10:32:09.237158    3660 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-617427"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: E1213 10:32:09.254797    3660 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-617427\" already exists" pod="kube-system/kube-apiserver-pause-617427"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: I1213 10:32:09.256759    3660 kubelet_node_status.go:124] "Node was previously registered" node="pause-617427"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: I1213 10:32:09.256860    3660 kubelet_node_status.go:78] "Successfully registered node" node="pause-617427"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: I1213 10:32:09.256885    3660 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: I1213 10:32:09.257911    3660 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: I1213 10:32:09.751166    3660 apiserver.go:52] "Watching apiserver"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: I1213 10:32:09.769614    3660 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: I1213 10:32:09.774950    3660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/607fcf8e-d8a3-471d-96f3-e1b24063d251-xtables-lock\") pod \"kube-proxy-f2c4f\" (UID: \"607fcf8e-d8a3-471d-96f3-e1b24063d251\") " pod="kube-system/kube-proxy-f2c4f"
	Dec 13 10:32:09 pause-617427 kubelet[3660]: I1213 10:32:09.775118    3660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/607fcf8e-d8a3-471d-96f3-e1b24063d251-lib-modules\") pod \"kube-proxy-f2c4f\" (UID: \"607fcf8e-d8a3-471d-96f3-e1b24063d251\") " pod="kube-system/kube-proxy-f2c4f"
	Dec 13 10:32:10 pause-617427 kubelet[3660]: I1213 10:32:10.057497    3660 scope.go:117] "RemoveContainer" containerID="639ae66155a42496ae026ae909a35634c613909c8f456458e05a2009d07d3c0b"
	Dec 13 10:32:14 pause-617427 kubelet[3660]: E1213 10:32:14.946923    3660 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765621934945613235 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 13 10:32:14 pause-617427 kubelet[3660]: E1213 10:32:14.947001    3660 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765621934945613235 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 13 10:32:15 pause-617427 kubelet[3660]: I1213 10:32:15.011334    3660 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 13 10:32:24 pause-617427 kubelet[3660]: E1213 10:32:24.950727    3660 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765621944949323467 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 13 10:32:24 pause-617427 kubelet[3660]: E1213 10:32:24.950764    3660 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765621944949323467 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-617427 -n pause-617427
helpers_test.go:270: (dbg) Run:  kubectl --context pause-617427 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (59.96s)
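
To triage this failure outside CI, a minimal repro sketch follows. It assumes a minikube source checkout with KVM2 and cri-o available on the host; the flag spellings (-run, -minikube-start-args) and the 30m timeout are assumptions about the integration-test harness, not values taken from this report.

    # Hypothetical local repro sketch; flag names assumed from minikube's
    # integration-test harness and may differ by version.
    go test ./test/integration -v -timeout 30m \
      -run 'TestPause/serial/SecondStartNoReconfiguration' \
      -args -minikube-start-args='--driver=kvm2 --container-runtime=crio'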


Test pass (365/431)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 7.57
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.17
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.2/json-events 3.66
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.08
18 TestDownloadOnly/v1.34.2/DeleteAll 0.17
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-beta.0/json-events 3.48
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.17
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.16
30 TestBinaryMirror 0.66
31 TestOffline 100.56
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 128.99
40 TestAddons/serial/GCPAuth/Namespaces 0.14
41 TestAddons/serial/GCPAuth/FakeCredentials 10.53
44 TestAddons/parallel/Registry 18.01
45 TestAddons/parallel/RegistryCreds 0.69
47 TestAddons/parallel/InspektorGadget 10.73
48 TestAddons/parallel/MetricsServer 7.37
50 TestAddons/parallel/CSI 45.61
51 TestAddons/parallel/Headlamp 21.88
52 TestAddons/parallel/CloudSpanner 5.56
53 TestAddons/parallel/LocalPath 53.64
54 TestAddons/parallel/NvidiaDevicePlugin 6.52
55 TestAddons/parallel/Yakd 11.83
57 TestAddons/StoppedEnableDisable 82.78
58 TestCertOptions 83.74
59 TestCertExpiration 618.1
61 TestForceSystemdFlag 77.33
62 TestForceSystemdEnv 60.57
67 TestErrorSpam/setup 37.41
68 TestErrorSpam/start 0.36
69 TestErrorSpam/status 0.7
70 TestErrorSpam/pause 1.57
71 TestErrorSpam/unpause 1.79
72 TestErrorSpam/stop 5.09
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 79.7
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 62.41
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.08
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.17
84 TestFunctional/serial/CacheCmd/cache/add_local 1.95
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
86 TestFunctional/serial/CacheCmd/cache/list 0.07
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.19
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.56
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.13
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 56.12
93 TestFunctional/serial/ComponentHealth 0.08
94 TestFunctional/serial/LogsCmd 1.33
95 TestFunctional/serial/LogsFileCmd 1.32
96 TestFunctional/serial/InvalidService 3.85
98 TestFunctional/parallel/ConfigCmd 0.46
99 TestFunctional/parallel/DashboardCmd 16.63
100 TestFunctional/parallel/DryRun 0.26
101 TestFunctional/parallel/InternationalLanguage 0.28
102 TestFunctional/parallel/StatusCmd 0.81
106 TestFunctional/parallel/ServiceCmdConnect 16.43
107 TestFunctional/parallel/AddonsCmd 0.16
108 TestFunctional/parallel/PersistentVolumeClaim 34.74
110 TestFunctional/parallel/SSHCmd 0.33
111 TestFunctional/parallel/CpCmd 1.23
112 TestFunctional/parallel/MySQL 32.53
113 TestFunctional/parallel/FileSync 0.18
114 TestFunctional/parallel/CertSync 1.09
118 TestFunctional/parallel/NodeLabels 0.07
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.43
122 TestFunctional/parallel/License 0.42
123 TestFunctional/parallel/ServiceCmd/DeployApp 10.25
124 TestFunctional/parallel/Version/short 0.07
125 TestFunctional/parallel/Version/components 0.51
126 TestFunctional/parallel/ImageCommands/ImageListShort 2.23
127 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
128 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
129 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
130 TestFunctional/parallel/ImageCommands/ImageBuild 4.18
131 TestFunctional/parallel/ImageCommands/Setup 1.52
132 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
133 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
134 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
135 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.56
136 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
137 TestFunctional/parallel/ProfileCmd/profile_list 0.33
138 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.92
139 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
140 TestFunctional/parallel/MountCmd/any-port 24.29
141 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.71
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.84
143 TestFunctional/parallel/ImageCommands/ImageRemove 2.2
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 11.32
145 TestFunctional/parallel/ServiceCmd/List 0.27
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.41
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.32
148 TestFunctional/parallel/ServiceCmd/Format 0.3
149 TestFunctional/parallel/ServiceCmd/URL 0.34
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.59
160 TestFunctional/parallel/MountCmd/specific-port 1.7
161 TestFunctional/parallel/MountCmd/VerifyCleanup 1.54
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 83.38
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 53.16
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.09
176 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.13
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.92
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.07
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.07
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.19
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.55
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.14
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.13
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.13
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.23
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.25
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 3.51
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.47
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.24
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.12
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.74
200 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.16
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.31
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.11
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.17
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.07
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.07
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.35
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.4
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.07
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.42
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.19
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.2
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.19
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.19
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.47
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.71
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.36
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.65
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.36
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.39
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.92
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.08
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.08
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.08
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.52
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.5
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.47
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.74
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.54
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.27
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.1
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 1.2
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 1.2
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
261 TestMultiControlPlane/serial/StartCluster 208.81
262 TestMultiControlPlane/serial/DeployApp 7.44
263 TestMultiControlPlane/serial/PingHostFromPods 1.39
264 TestMultiControlPlane/serial/AddWorkerNode 45.59
265 TestMultiControlPlane/serial/NodeLabels 0.07
266 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.65
267 TestMultiControlPlane/serial/CopyFile 10.84
268 TestMultiControlPlane/serial/StopSecondaryNode 87.65
269 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.49
270 TestMultiControlPlane/serial/RestartSecondaryNode 44.42
271 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.82
272 TestMultiControlPlane/serial/RestartClusterKeepsNodes 379.09
273 TestMultiControlPlane/serial/DeleteSecondaryNode 18.06
274 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.54
275 TestMultiControlPlane/serial/StopCluster 243.37
276 TestMultiControlPlane/serial/RestartCluster 85.21
277 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.53
278 TestMultiControlPlane/serial/AddSecondaryNode 76
279 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.67
284 TestJSONOutput/start/Command 49
285 TestJSONOutput/start/Audit 0
287 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
288 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
290 TestJSONOutput/pause/Command 0.73
291 TestJSONOutput/pause/Audit 0
293 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/unpause/Command 0.63
297 TestJSONOutput/unpause/Audit 0
299 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/stop/Command 6.93
303 TestJSONOutput/stop/Audit 0
305 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
307 TestErrorJSONOutput 0.25
312 TestMainNoArgs 0.06
313 TestMinikubeProfile 76.77
316 TestMountStart/serial/StartWithMountFirst 20.07
317 TestMountStart/serial/VerifyMountFirst 0.31
318 TestMountStart/serial/StartWithMountSecond 19.23
319 TestMountStart/serial/VerifyMountSecond 0.31
320 TestMountStart/serial/DeleteFirst 0.71
321 TestMountStart/serial/VerifyMountPostDelete 0.31
322 TestMountStart/serial/Stop 1.26
323 TestMountStart/serial/RestartStopped 18.26
324 TestMountStart/serial/VerifyMountPostStop 0.33
327 TestMultiNode/serial/FreshStart2Nodes 99.8
328 TestMultiNode/serial/DeployApp2Nodes 6.11
329 TestMultiNode/serial/PingHostFrom2Pods 0.88
330 TestMultiNode/serial/AddNode 40.38
331 TestMultiNode/serial/MultiNodeLabels 0.07
332 TestMultiNode/serial/ProfileList 0.46
333 TestMultiNode/serial/CopyFile 6.12
334 TestMultiNode/serial/StopNode 2.22
335 TestMultiNode/serial/StartAfterStop 37.29
336 TestMultiNode/serial/RestartKeepsNodes 288
337 TestMultiNode/serial/DeleteNode 2.69
338 TestMultiNode/serial/StopMultiNode 169.86
339 TestMultiNode/serial/RestartMultiNode 87.32
340 TestMultiNode/serial/ValidateNameConflict 39.73
347 TestScheduledStopUnix 105.86
351 TestRunningBinaryUpgrade 367.01
353 TestKubernetesUpgrade 160.51
356 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
357 TestNoKubernetes/serial/StartWithK8s 77.49
358 TestNoKubernetes/serial/StartWithStopK8s 26.09
367 TestPause/serial/Start 79.51
368 TestNoKubernetes/serial/Start 34.11
369 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
370 TestNoKubernetes/serial/VerifyK8sNotRunning 0.17
371 TestNoKubernetes/serial/ProfileList 1.32
372 TestNoKubernetes/serial/Stop 1.28
373 TestNoKubernetes/serial/StartNoArgs 19.71
374 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.17
375 TestStoppedBinaryUpgrade/Setup 0.58
376 TestStoppedBinaryUpgrade/Upgrade 77.15
385 TestNetworkPlugins/group/false 5.58
389 TestISOImage/Setup 22.13
390 TestStoppedBinaryUpgrade/MinikubeLogs 1.3
392 TestISOImage/Binaries/crictl 0.2
393 TestISOImage/Binaries/curl 0.23
394 TestISOImage/Binaries/docker 0.23
395 TestISOImage/Binaries/git 0.2
396 TestISOImage/Binaries/iptables 0.22
397 TestISOImage/Binaries/podman 0.2
398 TestISOImage/Binaries/rsync 0.2
399 TestISOImage/Binaries/socat 0.21
400 TestISOImage/Binaries/wget 0.21
401 TestISOImage/Binaries/VBoxControl 0.22
402 TestISOImage/Binaries/VBoxService 0.21
404 TestStartStop/group/old-k8s-version/serial/FirstStart 90.93
406 TestStartStop/group/no-preload/serial/FirstStart 88.69
408 TestStartStop/group/embed-certs/serial/FirstStart 74.59
409 TestStartStop/group/old-k8s-version/serial/DeployApp 11.34
410 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.06
411 TestStartStop/group/old-k8s-version/serial/Stop 74.51
412 TestStartStop/group/no-preload/serial/DeployApp 11.28
413 TestStartStop/group/embed-certs/serial/DeployApp 11.3
414 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
415 TestStartStop/group/no-preload/serial/Stop 82.39
416 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.94
417 TestStartStop/group/embed-certs/serial/Stop 72.84
418 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.15
419 TestStartStop/group/old-k8s-version/serial/SecondStart 43.81
420 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
421 TestStartStop/group/no-preload/serial/SecondStart 56.1
422 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.16
423 TestStartStop/group/embed-certs/serial/SecondStart 59.87
424 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 12.01
425 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
426 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
427 TestStartStop/group/old-k8s-version/serial/Pause 3.53
429 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 53.33
430 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
431 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 9.01
432 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
433 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
434 TestStartStop/group/no-preload/serial/Pause 2.89
435 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
437 TestStartStop/group/newest-cni/serial/FirstStart 41.47
438 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
439 TestStartStop/group/embed-certs/serial/Pause 3.01
440 TestNetworkPlugins/group/auto/Start 91.37
441 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.3
442 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.15
443 TestStartStop/group/default-k8s-diff-port/serial/Stop 81.13
444 TestStartStop/group/newest-cni/serial/DeployApp 0
445 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.16
446 TestStartStop/group/newest-cni/serial/Stop 7.05
447 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
448 TestStartStop/group/newest-cni/serial/SecondStart 31.83
449 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
450 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
451 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
452 TestStartStop/group/newest-cni/serial/Pause 2.62
453 TestNetworkPlugins/group/kindnet/Start 64.58
454 TestNetworkPlugins/group/auto/KubeletFlags 0.17
455 TestNetworkPlugins/group/auto/NetCatPod 10.23
456 TestNetworkPlugins/group/auto/DNS 0.19
457 TestNetworkPlugins/group/auto/Localhost 0.14
458 TestNetworkPlugins/group/auto/HairPin 0.14
459 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
460 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 47.95
461 TestNetworkPlugins/group/flannel/Start 78.04
462 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
463 TestNetworkPlugins/group/kindnet/KubeletFlags 0.19
464 TestNetworkPlugins/group/kindnet/NetCatPod 12.27
465 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 15.01
466 TestNetworkPlugins/group/kindnet/DNS 0.17
467 TestNetworkPlugins/group/kindnet/Localhost 0.16
468 TestNetworkPlugins/group/kindnet/HairPin 0.13
469 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
470 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
471 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.08
472 TestNetworkPlugins/group/enable-default-cni/Start 81.39
473 TestNetworkPlugins/group/bridge/Start 73.88
474 TestNetworkPlugins/group/flannel/ControllerPod 6.23
475 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
476 TestNetworkPlugins/group/flannel/NetCatPod 10.26
477 TestNetworkPlugins/group/flannel/DNS 0.16
478 TestNetworkPlugins/group/flannel/Localhost 0.14
479 TestNetworkPlugins/group/flannel/HairPin 0.14
480 TestNetworkPlugins/group/custom-flannel/Start 69.72
481 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
482 TestNetworkPlugins/group/bridge/NetCatPod 10.27
483 TestNetworkPlugins/group/calico/Start 74.27
484 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
485 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.59
486 TestNetworkPlugins/group/bridge/DNS 0.19
487 TestNetworkPlugins/group/bridge/Localhost 0.15
488 TestNetworkPlugins/group/bridge/HairPin 0.16
489 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
490 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
491 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
493 TestISOImage/PersistentMounts//data 0.22
494 TestISOImage/PersistentMounts//var/lib/docker 0.2
495 TestISOImage/PersistentMounts//var/lib/cni 0.21
496 TestISOImage/PersistentMounts//var/lib/kubelet 0.21
497 TestISOImage/PersistentMounts//var/lib/minikube 0.22
498 TestISOImage/PersistentMounts//var/lib/toolbox 0.21
499 TestISOImage/PersistentMounts//var/lib/boot2docker 0.2
500 TestISOImage/VersionJSON 0.19
501 TestISOImage/eBPFSupport 0.2
502 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.18
503 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.22
504 TestNetworkPlugins/group/custom-flannel/DNS 0.14
505 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
506 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
507 TestNetworkPlugins/group/calico/ControllerPod 6.01
508 TestNetworkPlugins/group/calico/KubeletFlags 0.17
509 TestNetworkPlugins/group/calico/NetCatPod 11.24
510 TestNetworkPlugins/group/calico/DNS 0.14
511 TestNetworkPlugins/group/calico/Localhost 0.12
512 TestNetworkPlugins/group/calico/HairPin 0.13
TestDownloadOnly/v1.28.0/json-events (7.57s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-850568 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-850568 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.565737712s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.57s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1213 09:11:36.878100  391877 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1213 09:11:36.878194  391877 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-387918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-850568
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-850568: exit status 85 (80.497891ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-850568 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-850568 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:11:29
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:11:29.370153  391889 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:11:29.370255  391889 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:29.370259  391889 out.go:374] Setting ErrFile to fd 2...
	I1213 09:11:29.370264  391889 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:29.370482  391889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
	W1213 09:11:29.370618  391889 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22127-387918/.minikube/config/config.json: open /home/jenkins/minikube-integration/22127-387918/.minikube/config/config.json: no such file or directory
	I1213 09:11:29.371088  391889 out.go:368] Setting JSON to true
	I1213 09:11:29.372064  391889 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3238,"bootTime":1765613851,"procs":274,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:11:29.372130  391889 start.go:143] virtualization: kvm guest
	I1213 09:11:29.378411  391889 out.go:99] [download-only-850568] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1213 09:11:29.378629  391889 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22127-387918/.minikube/cache/preloaded-tarball: no such file or directory
	I1213 09:11:29.378730  391889 notify.go:221] Checking for updates...
	I1213 09:11:29.380044  391889 out.go:171] MINIKUBE_LOCATION=22127
	I1213 09:11:29.381565  391889 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:11:29.382847  391889 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
	I1213 09:11:29.384225  391889 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
	I1213 09:11:29.385363  391889 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1213 09:11:29.388352  391889 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 09:11:29.388662  391889 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:11:29.421028  391889 out.go:99] Using the kvm2 driver based on user configuration
	I1213 09:11:29.421060  391889 start.go:309] selected driver: kvm2
	I1213 09:11:29.421066  391889 start.go:927] validating driver "kvm2" against <nil>
	I1213 09:11:29.421419  391889 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 09:11:29.422108  391889 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1213 09:11:29.422252  391889 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 09:11:29.422275  391889 cni.go:84] Creating CNI manager for ""
	I1213 09:11:29.422339  391889 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1213 09:11:29.422345  391889 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 09:11:29.422385  391889 start.go:353] cluster config:
	{Name:download-only-850568 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-850568 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:29.422570  391889 iso.go:125] acquiring lock: {Name:mk4ce8bfab58620efe86d1c7a68d79ed9c81b6ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:11:29.424089  391889 out.go:99] Downloading VM boot image ...
	I1213 09:11:29.424117  391889 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22127-387918/.minikube/cache/iso/amd64/minikube-v1.37.0-1765481609-22101-amd64.iso
	I1213 09:11:32.825682  391889 out.go:99] Starting "download-only-850568" primary control-plane node in "download-only-850568" cluster
	I1213 09:11:32.825740  391889 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1213 09:11:32.844091  391889 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1213 09:11:32.844134  391889 cache.go:65] Caching tarball of preloaded images
	I1213 09:11:32.844352  391889 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1213 09:11:32.846189  391889 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1213 09:11:32.846216  391889 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1213 09:11:32.866858  391889 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1213 09:11:32.866991  391889 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22127-387918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-850568 host does not exist
	  To start a cluster, run: "minikube start -p download-only-850568"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-850568
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.2/json-events (3.66s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-530766 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-530766 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.660779303s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (3.66s)

                                                
                                    
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1213 09:11:40.943736  391877 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1213 09:11:40.943771  391877 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-387918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-530766
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-530766: exit status 85 (79.399344ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-850568 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-850568 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p download-only-850568                                                                                                                                                 │ download-only-850568 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -o=json --download-only -p download-only-530766 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-530766 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:11:37
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:11:37.336891  392070 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:11:37.337184  392070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:37.337196  392070 out.go:374] Setting ErrFile to fd 2...
	I1213 09:11:37.337203  392070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:37.337447  392070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
	I1213 09:11:37.337989  392070 out.go:368] Setting JSON to true
	I1213 09:11:37.339058  392070 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3246,"bootTime":1765613851,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:11:37.339118  392070 start.go:143] virtualization: kvm guest
	I1213 09:11:37.341024  392070 out.go:99] [download-only-530766] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:11:37.341233  392070 notify.go:221] Checking for updates...
	I1213 09:11:37.342416  392070 out.go:171] MINIKUBE_LOCATION=22127
	I1213 09:11:37.343756  392070 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:11:37.344954  392070 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
	I1213 09:11:37.346131  392070 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
	I1213 09:11:37.347341  392070 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-530766 host does not exist
	  To start a cluster, run: "minikube start -p download-only-530766"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.17s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-530766
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/json-events (3.48s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-553660 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-553660 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.474733364s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (3.48s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1213 09:11:44.818810  391877 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1213 09:11:44.818875  391877 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-387918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-553660
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-553660: exit status 85 (76.817288ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-850568 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-850568 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p download-only-850568                                                                                                                                                        │ download-only-850568 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -o=json --download-only -p download-only-530766 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-530766 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p download-only-530766                                                                                                                                                        │ download-only-530766 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -o=json --download-only -p download-only-553660 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-553660 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:11:41
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:11:41.399451  392250 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:11:41.399690  392250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:41.399699  392250 out.go:374] Setting ErrFile to fd 2...
	I1213 09:11:41.399703  392250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:41.399897  392250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
	I1213 09:11:41.400404  392250 out.go:368] Setting JSON to true
	I1213 09:11:41.401300  392250 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3250,"bootTime":1765613851,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:11:41.401376  392250 start.go:143] virtualization: kvm guest
	I1213 09:11:41.403331  392250 out.go:99] [download-only-553660] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:11:41.403498  392250 notify.go:221] Checking for updates...
	I1213 09:11:41.405100  392250 out.go:171] MINIKUBE_LOCATION=22127
	I1213 09:11:41.406559  392250 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:11:41.407908  392250 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
	I1213 09:11:41.412025  392250 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
	I1213 09:11:41.413425  392250 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-553660 host does not exist
	  To start a cluster, run: "minikube start -p download-only-553660"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.17s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-553660
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestBinaryMirror (0.66s)

                                                
                                                
=== RUN   TestBinaryMirror
I1213 09:11:45.682242  391877 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-573687 --alsologtostderr --binary-mirror http://127.0.0.1:35927 --driver=kvm2  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-573687" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-573687
--- PASS: TestBinaryMirror (0.66s)

                                                
                                    
TestOffline (100.56s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-570501 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-570501 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m39.251889784s)
helpers_test.go:176: Cleaning up "offline-crio-570501" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-570501
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-570501: (1.309583918s)
--- PASS: TestOffline (100.56s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-246361
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-246361: exit status 85 (69.69684ms)

                                                
                                                
-- stdout --
	* Profile "addons-246361" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-246361"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-246361
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-246361: exit status 85 (70.848221ms)

                                                
                                                
-- stdout --
	* Profile "addons-246361" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-246361"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (128.99s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-246361 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-246361 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m8.99469175s)
--- PASS: TestAddons/Setup (128.99s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-246361 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-246361 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.53s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-246361 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-246361 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [decad740-d6c4-4453-a6a3-0a9ac1f58430] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [decad740-d6c4-4453-a6a3-0a9ac1f58430] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.005451217s
addons_test.go:696: (dbg) Run:  kubectl --context addons-246361 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-246361 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-246361 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.53s)

                                                
                                    
TestAddons/parallel/Registry (18.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 6.866568ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-4vn9j" [0ffa6230-ba82-4c5a-bfd3-a4c73acdce35] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005358901s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-q8xvn" [6c738182-6c24-4d8e-acc8-25d9eae8cfbd] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004255239s
addons_test.go:394: (dbg) Run:  kubectl --context addons-246361 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-246361 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-246361 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.22859185s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-246361 ip
2025/12/13 09:14:32 [DEBUG] GET http://192.168.39.185:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-246361 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.01s)
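The registry addon check above amounts to two probes; a sketch of running them manually while the addon is still enabled (profile name and node port taken from the log):

    # In-cluster: the registry Service should answer an HTTP request from a throwaway busybox pod.
    kubectl --context addons-246361 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

    # From the host: registry-proxy exposes the registry on the node IP at port 5000.
    curl -sI "http://$(out/minikube-linux-amd64 -p addons-246361 ip):5000/"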

                                                
                                    
TestAddons/parallel/RegistryCreds (0.69s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 7.858558ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-246361
addons_test.go:334: (dbg) Run:  kubectl --context addons-246361 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-246361 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.69s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.73s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-9tz8w" [f1c07efd-bb7a-4a55-a105-0268b5a3a939] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.006902802s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-246361 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-246361 addons disable inspektor-gadget --alsologtostderr -v=1: (5.723206882s)
--- PASS: TestAddons/parallel/InspektorGadget (10.73s)

                                                
                                    
TestAddons/parallel/MetricsServer (7.37s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 7.750579ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-pglv5" [ce676a7b-70bb-4524-b292-8a00796b0425] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.011640137s
addons_test.go:465: (dbg) Run:  kubectl --context addons-246361 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-246361 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-246361 addons disable metrics-server --alsologtostderr -v=1: (1.272074413s)
--- PASS: TestAddons/parallel/MetricsServer (7.37s)

                                                
                                    
TestAddons/parallel/CSI (45.61s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1213 09:14:40.432823  391877 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1213 09:14:40.441283  391877 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1213 09:14:40.441340  391877 kapi.go:107] duration metric: took 8.50404ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 8.541261ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-246361 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-246361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-246361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-246361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-246361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-246361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-246361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-246361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-246361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-246361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-246361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-246361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-246361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-246361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-246361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-246361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-246361 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-246361 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [ebe4524f-f315-4896-b880-acceb768c1ca] Pending
helpers_test.go:353: "task-pv-pod" [ebe4524f-f315-4896-b880-acceb768c1ca] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.00420264s
addons_test.go:574: (dbg) Run:  kubectl --context addons-246361 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-246361 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-246361 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-246361 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-246361 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-246361 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-246361 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-246361 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-246361 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-246361 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-246361 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-246361 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-246361 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [afb26031-db54-4e98-8ee6-0f5e2530ccb3] Pending
helpers_test.go:353: "task-pv-pod-restore" [afb26031-db54-4e98-8ee6-0f5e2530ccb3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [afb26031-db54-4e98-8ee6-0f5e2530ccb3] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.006261783s
addons_test.go:616: (dbg) Run:  kubectl --context addons-246361 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-246361 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-246361 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-246361 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-246361 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-246361 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.812369783s)
--- PASS: TestAddons/parallel/CSI (45.61s)
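The long run of "get pvc ... jsonpath={.status.phase}" lines above is simply a poll until the claim reports Bound; a minimal bash equivalent (wait_for_pvc_bound is a hypothetical helper, not part of the test suite):

    # Poll a PVC in the default namespace until its phase is Bound, or give up after ~2 minutes.
    wait_for_pvc_bound() {
      local pvc="$1" phase=""
      for _ in $(seq 1 60); do
        phase=$(kubectl --context addons-246361 get pvc "$pvc" -n default -o jsonpath='{.status.phase}')
        [ "$phase" = "Bound" ] && return 0
        sleep 2
      done
      echo "timed out waiting for pvc/$pvc (last phase: ${phase:-unknown})" >&2
      return 1
    }

    wait_for_pvc_bound hpvc            # the original claim
    wait_for_pvc_bound hpvc-restore    # the claim restored from the volume snapshot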

                                                
                                    
TestAddons/parallel/Headlamp (21.88s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-246361 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-246361 --alsologtostderr -v=1: (1.128702355s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-dvgst" [e8dd15d1-5251-4402-abbf-8a06d9c54835] Pending
helpers_test.go:353: "headlamp-dfcdc64b-dvgst" [e8dd15d1-5251-4402-abbf-8a06d9c54835] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-dvgst" [e8dd15d1-5251-4402-abbf-8a06d9c54835] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.00651188s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-246361 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-246361 addons disable headlamp --alsologtostderr -v=1: (5.739424173s)
--- PASS: TestAddons/parallel/Headlamp (21.88s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.56s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-tjwz8" [9755c401-bb45-4e16-8779-ce8fd9c7c9cc] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003503247s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-246361 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.56s)

                                                
                                    
TestAddons/parallel/LocalPath (53.64s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-246361 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-246361 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-246361 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-246361 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-246361 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-246361 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-246361 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-246361 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-246361 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [0aa76539-b5c7-4dad-87a4-e1de343e2b3b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [0aa76539-b5c7-4dad-87a4-e1de343e2b3b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [0aa76539-b5c7-4dad-87a4-e1de343e2b3b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.01574771s
addons_test.go:969: (dbg) Run:  kubectl --context addons-246361 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-246361 ssh "cat /opt/local-path-provisioner/pvc-b8114b46-aff7-41f0-9a17-c8dadafee4e6_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-246361 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-246361 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-246361 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-246361 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.842618197s)
--- PASS: TestAddons/parallel/LocalPath (53.64s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.52s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-ghprj" [64bd87e7-7e06-4465-abb1-e27282853105] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003923807s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-246361 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.52s)

                                                
                                    
TestAddons/parallel/Yakd (11.83s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-jlj7d" [c6674335-1f29-4a39-a03d-dfc2843240fd] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00448773s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-246361 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-246361 addons disable yakd --alsologtostderr -v=1: (5.825558853s)
--- PASS: TestAddons/parallel/Yakd (11.83s)

                                                
                                    
TestAddons/StoppedEnableDisable (82.78s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-246361
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-246361: (1m22.556621964s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-246361
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-246361
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-246361
--- PASS: TestAddons/StoppedEnableDisable (82.78s)

                                                
                                    
TestCertOptions (83.74s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-527505 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-527505 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m22.377371895s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-527505 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-527505 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-527505 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-527505" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-527505
--- PASS: TestCertOptions (83.74s)

                                                
                                    
TestCertExpiration (618.1s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-826548 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-826548 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (55.486101219s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-826548 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-826548 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (6m21.495217453s)
helpers_test.go:176: Cleaning up "cert-expiration-826548" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-826548
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-826548: (1.118881747s)
--- PASS: TestCertExpiration (618.10s)
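The test issues 3m certificates first and then restarts with an 8760h expiration; while the profile still exists, the resulting apiserver certificate lifetime can be inspected directly (a sketch, reusing the certificate path exercised by TestCertOptions above):

    # Print the validity window of the apiserver certificate inside the minikube VM.
    out/minikube-linux-amd64 -p cert-expiration-826548 ssh \
      "openssl x509 -noout -startdate -enddate -in /var/lib/minikube/certs/apiserver.crt"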

                                                
                                    
TestForceSystemdFlag (77.33s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-167891 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1213 10:32:37.813756  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-167891 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m16.326389235s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-167891 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-167891" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-167891
--- PASS: TestForceSystemdFlag (77.33s)
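The log only shows the CRI-O drop-in being read, not what the test asserts about it; a hedged sketch of the kind of check involved, assuming the assertion concerns CRI-O's cgroup manager being set to systemd:

    # Inspect the CRI-O drop-in written by minikube and look for the cgroup manager setting.
    # Which key docker_test.go actually asserts on is an assumption here.
    out/minikube-linux-amd64 -p force-systemd-flag-167891 ssh \
      "cat /etc/crio/crio.conf.d/02-crio.conf" | grep -i cgroup_manager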

                                                
                                    
TestForceSystemdEnv (60.57s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-703729 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-703729 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (59.731904623s)
helpers_test.go:176: Cleaning up "force-systemd-env-703729" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-703729
--- PASS: TestForceSystemdEnv (60.57s)

                                                
                                    
TestErrorSpam/setup (37.41s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-427347 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-427347 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-427347 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-427347 --driver=kvm2  --container-runtime=crio: (37.406332956s)
--- PASS: TestErrorSpam/setup (37.41s)

                                                
                                    
TestErrorSpam/start (0.36s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-427347 --log_dir /tmp/nospam-427347 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-427347 --log_dir /tmp/nospam-427347 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-427347 --log_dir /tmp/nospam-427347 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

                                                
                                    
TestErrorSpam/status (0.7s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-427347 --log_dir /tmp/nospam-427347 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-427347 --log_dir /tmp/nospam-427347 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-427347 --log_dir /tmp/nospam-427347 status
--- PASS: TestErrorSpam/status (0.70s)

                                                
                                    
TestErrorSpam/pause (1.57s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-427347 --log_dir /tmp/nospam-427347 pause
E1213 09:18:56.551874  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:18:56.558471  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:18:56.570004  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:18:56.591455  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:18:56.633011  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:18:56.714623  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:18:56.876212  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-427347 --log_dir /tmp/nospam-427347 pause
E1213 09:18:57.197524  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-427347 --log_dir /tmp/nospam-427347 pause
E1213 09:18:57.839752  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestErrorSpam/pause (1.57s)

                                                
                                    
TestErrorSpam/unpause (1.79s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-427347 --log_dir /tmp/nospam-427347 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-427347 --log_dir /tmp/nospam-427347 unpause
E1213 09:18:59.121853  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-427347 --log_dir /tmp/nospam-427347 unpause
--- PASS: TestErrorSpam/unpause (1.79s)

                                                
                                    
TestErrorSpam/stop (5.09s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-427347 --log_dir /tmp/nospam-427347 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-427347 --log_dir /tmp/nospam-427347 stop: (1.818696231s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-427347 --log_dir /tmp/nospam-427347 stop
E1213 09:19:01.683727  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-427347 --log_dir /tmp/nospam-427347 stop: (2.030660862s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-427347 --log_dir /tmp/nospam-427347 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-427347 --log_dir /tmp/nospam-427347 stop: (1.243354675s)
--- PASS: TestErrorSpam/stop (5.09s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22127-387918/.minikube/files/etc/test/nested/copy/391877/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (79.7s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-992282 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1213 09:19:06.806097  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:19:17.048353  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:19:37.530752  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:20:18.493863  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-992282 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m19.702693664s)
--- PASS: TestFunctional/serial/StartWithProxy (79.70s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (62.41s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1213 09:20:24.924120  391877 config.go:182] Loaded profile config "functional-992282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-992282 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-992282 --alsologtostderr -v=8: (1m2.408996553s)
functional_test.go:678: soft start took 1m2.409973036s for "functional-992282" cluster.
I1213 09:21:27.333539  391877 config.go:182] Loaded profile config "functional-992282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (62.41s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-992282 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.17s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-992282 cache add registry.k8s.io/pause:3.1: (1.054931953s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-992282 cache add registry.k8s.io/pause:3.3: (1.069515272s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-992282 cache add registry.k8s.io/pause:latest: (1.044616556s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.17s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.95s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-992282 /tmp/TestFunctionalserialCacheCmdcacheadd_local2052054006/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 cache add minikube-local-cache-test:functional-992282
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-992282 cache add minikube-local-cache-test:functional-992282: (1.5731542s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 cache delete minikube-local-cache-test:functional-992282
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-992282
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.95s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992282 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (179.265055ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)
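The cache_reload sequence above can be reproduced by hand; a sketch using the same profile and image:

    # 1. Remove the image from the node's container storage.
    out/minikube-linux-amd64 -p functional-992282 ssh sudo crictl rmi registry.k8s.io/pause:latest
    # 2. Confirm it is gone; crictl inspecti exits non-zero for a missing image.
    out/minikube-linux-amd64 -p functional-992282 ssh sudo crictl inspecti registry.k8s.io/pause:latest \
      || echo "image absent, as expected"
    # 3. Push everything in minikube's local cache back onto the node.
    out/minikube-linux-amd64 -p functional-992282 cache reload
    # 4. The same inspect should now succeed.
    out/minikube-linux-amd64 -p functional-992282 ssh sudo crictl inspecti registry.k8s.io/pause:latest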

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 kubectl -- --context functional-992282 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-992282 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (56.12s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-992282 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1213 09:21:40.418178  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-992282 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (56.116640872s)
functional_test.go:776: restart took 56.116832026s for "functional-992282" cluster.
I1213 09:22:30.984253  391877 config.go:182] Loaded profile config "functional-992282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (56.12s)
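One way to confirm that the extra apiserver flag survived the restart is to read the static pod's command line (a sketch; the pod name kube-apiserver-functional-992282 assumes minikube's usual component-profile naming):

    # Look for the admission-plugins flag on the kube-apiserver static pod.
    kubectl --context functional-992282 -n kube-system get pod kube-apiserver-functional-992282 \
      -o jsonpath='{.spec.containers[0].command}' | tr ',' '\n' | grep enable-admission-plugins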

                                                
                                    
TestFunctional/serial/ComponentHealth (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-992282 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.08s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.33s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-992282 logs: (1.331051038s)
--- PASS: TestFunctional/serial/LogsCmd (1.33s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 logs --file /tmp/TestFunctionalserialLogsFileCmd969112731/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-992282 logs --file /tmp/TestFunctionalserialLogsFileCmd969112731/001/logs.txt: (1.323075977s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.32s)

                                                
                                    
TestFunctional/serial/InvalidService (3.85s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-992282 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-992282
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-992282: exit status 115 (242.639931ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.245:31153 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-992282 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.85s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992282 config get cpus: exit status 14 (74.314051ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992282 config get cpus: exit status 14 (71.658122ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
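Exit status 14 in this run corresponds to the "specified key could not be found in config" error shown above; a short sketch of the same set/get/unset round trip with the exit code made visible:

    # Reading an unset key fails; reading after 'config set' succeeds.
    out/minikube-linux-amd64 -p functional-992282 config set cpus 2
    out/minikube-linux-amd64 -p functional-992282 config get cpus              # prints 2
    out/minikube-linux-amd64 -p functional-992282 config unset cpus
    out/minikube-linux-amd64 -p functional-992282 config get cpus; echo "exit code: $?"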

                                                
                                    
TestFunctional/parallel/DashboardCmd (16.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-992282 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-992282 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 398560: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.63s)

                                                
                                    
TestFunctional/parallel/DryRun (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-992282 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-992282 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (126.726322ms)

                                                
                                                
-- stdout --
	* [functional-992282] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 09:22:59.150136  398451 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:22:59.150268  398451 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:22:59.150280  398451 out.go:374] Setting ErrFile to fd 2...
	I1213 09:22:59.150286  398451 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:22:59.150518  398451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
	I1213 09:22:59.150977  398451 out.go:368] Setting JSON to false
	I1213 09:22:59.151910  398451 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3928,"bootTime":1765613851,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:22:59.151986  398451 start.go:143] virtualization: kvm guest
	I1213 09:22:59.154047  398451 out.go:179] * [functional-992282] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:22:59.155351  398451 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 09:22:59.155392  398451 notify.go:221] Checking for updates...
	I1213 09:22:59.157660  398451 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:22:59.158773  398451 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
	I1213 09:22:59.160218  398451 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
	I1213 09:22:59.161409  398451 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:22:59.162595  398451 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:22:59.164430  398451 config.go:182] Loaded profile config "functional-992282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:22:59.164945  398451 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:22:59.198595  398451 out.go:179] * Using the kvm2 driver based on existing profile
	I1213 09:22:59.199871  398451 start.go:309] selected driver: kvm2
	I1213 09:22:59.199889  398451 start.go:927] validating driver "kvm2" against &{Name:functional-992282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-992282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:22:59.200004  398451 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:22:59.202988  398451 out.go:203] 
	W1213 09:22:59.204303  398451 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 09:22:59.205646  398451 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-992282 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.26s)
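Note: the non-zero exit (status 23) above is the expected outcome here: a dry-run start requesting 250MB is rejected with RSRC_INSUFFICIENT_REQ_MEMORY because it is below the 1800MB usable minimum. A minimal sketch of that assertion with os/exec follows; it is illustrative only, not the actual functional_test.go code (binary path and flags are copied from the log).

    // dryrun_sketch_test.go - illustrative sketch, not part of the minikube test suite.
    package dryrunsketch

    import (
        "os/exec"
        "strings"
        "testing"
    )

    func TestDryRunRejectsLowMemory(t *testing.T) {
        cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-992282",
            "--dry-run", "--memory", "250MB", "--alsologtostderr",
            "--driver=kvm2", "--container-runtime=crio")
        out, err := cmd.CombinedOutput()
        if err == nil {
            t.Fatalf("expected non-zero exit for a 250MB request, got success:\n%s", out)
        }
        if !strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY") {
            t.Fatalf("expected RSRC_INSUFFICIENT_REQ_MEMORY in output, got:\n%s", out)
        }
    }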

TestFunctional/parallel/InternationalLanguage (0.28s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-992282 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-992282 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (283.184802ms)

-- stdout --
	* [functional-992282] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1213 09:22:59.411384  398499 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:22:59.411520  398499 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:22:59.411527  398499 out.go:374] Setting ErrFile to fd 2...
	I1213 09:22:59.411533  398499 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:22:59.412020  398499 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
	I1213 09:22:59.412669  398499 out.go:368] Setting JSON to false
	I1213 09:22:59.413957  398499 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3928,"bootTime":1765613851,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:22:59.414048  398499 start.go:143] virtualization: kvm guest
	I1213 09:22:59.511402  398499 out.go:179] * [functional-992282] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1213 09:22:59.531913  398499 notify.go:221] Checking for updates...
	I1213 09:22:59.531976  398499 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 09:22:59.567151  398499 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:22:59.569147  398499 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
	I1213 09:22:59.578137  398499 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
	I1213 09:22:59.579556  398499 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:22:59.581074  398499 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:22:59.583006  398499 config.go:182] Loaded profile config "functional-992282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:22:59.583784  398499 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:22:59.616793  398499 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1213 09:22:59.618090  398499 start.go:309] selected driver: kvm2
	I1213 09:22:59.618110  398499 start.go:927] validating driver "kvm2" against &{Name:functional-992282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-992282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:22:59.618244  398499 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:22:59.620533  398499 out.go:203] 
	W1213 09:22:59.621679  398499 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 09:22:59.622955  398499 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.28s)
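Note: the French stderr above is the same RSRC_INSUFFICIENT_REQ_MEMORY failure as in DryRun (250MiB requested, 1800MB usable minimum), rendered through minikube's translations. This report does not show how the locale is selected; the sketch below assumes it is driven by the standard LC_ALL/LANG environment variables, which is an assumption rather than something confirmed by the log.

    // Sketch: run the same dry-run under a French locale and look for the
    // translated driver line seen in the stdout above. The LC_ALL/LANG
    // mechanism is assumed, not taken from this report.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-992282",
            "--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=crio")
        cmd.Env = append(os.Environ(), "LC_ALL=fr", "LANG=fr_FR.UTF-8") // assumed locale switch
        out, _ := cmd.CombinedOutput()
        fmt.Println("french output seen:", strings.Contains(string(out), "Utilisation du pilote kvm2"))
    }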

TestFunctional/parallel/StatusCmd (0.81s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.81s)
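Note: the -f flag above takes a Go text/template over the status fields ({{.Host}}, {{.Kubelet}}, {{.APIServer}}, {{.Kubeconfig}}). The sketch below shows how such a template renders; the Status struct is a stand-in, not minikube's actual type.

    // Illustrative rendering of the --format template used above.
    package main

    import (
        "os"
        "text/template"
    )

    // Status is a stand-in carrying the fields referenced by the template.
    type Status struct {
        Host, Kubelet, APIServer, Kubeconfig string
    }

    func main() {
        tmpl := template.Must(template.New("status").Parse(
            "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
        _ = tmpl.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"})
    }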

TestFunctional/parallel/ServiceCmdConnect (16.43s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-992282 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-992282 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-q24xq" [803ef2de-2da6-4f1b-92b8-66d8f5e01564] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-q24xq" [803ef2de-2da6-4f1b-92b8-66d8f5e01564] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 16.00399431s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.245:32128
functional_test.go:1680: http://192.168.39.245:32128: success! body:
Request served by hello-node-connect-7d85dfc575-q24xq

HTTP/1.1 GET /

Host: 192.168.39.245:32128
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (16.43s)
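Note: the body above is the kicbase/echo-server reply fetched from the NodePort URL printed by "service hello-node-connect --url". A minimal sketch of that final check with net/http (the URL is copied from this run and will differ between runs):

    // Sketch: fetch the NodePort endpoint and print the echo-server reply.
    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        url := "http://192.168.39.245:32128" // from the log above; changes every run
        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s", body)
    }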

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (34.74s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [c325ed77-5538-4671-85b1-ad3420e62be1] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.187858541s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-992282 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-992282 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-992282 get pvc myclaim -o=json
I1213 09:22:54.941679  391877 retry.go:31] will retry after 2.166704638s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:863d373e-8f3e-43c1-97d5-fd7528ac3fde ResourceVersion:834 Generation:0 CreationTimestamp:2025-12-13 09:22:54 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001e180b0 VolumeMode:0xc001e180c0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-992282 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-992282 apply -f testdata/storage-provisioner/pod.yaml
I1213 09:22:57.539633  391877 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [e7f6463a-fbdf-46d6-818f-699425f451dc] Pending
helpers_test.go:353: "sp-pod" [e7f6463a-fbdf-46d6-818f-699425f451dc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [e7f6463a-fbdf-46d6-818f-699425f451dc] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.009273171s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-992282 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-992282 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-992282 delete -f testdata/storage-provisioner/pod.yaml: (1.180825842s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-992282 apply -f testdata/storage-provisioner/pod.yaml
I1213 09:23:16.074077  391877 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [fa09848b-1b0c-4dbc-a21e-3959afe39a6b] Pending
2025/12/13 09:23:16 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:353: "sp-pod" [fa09848b-1b0c-4dbc-a21e-3959afe39a6b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [fa09848b-1b0c-4dbc-a21e-3959afe39a6b] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003889199s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-992282 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (34.74s)
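Note: the retry.go lines above show the test polling the PVC until its phase moves from Pending to Bound. The sketch below is a simplified version of that poll, shelling out to kubectl and decoding only status.phase; it is not the helper the suite actually uses.

    // Sketch: wait for the PVC from testdata/storage-provisioner/pvc.yaml to bind.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        for i := 0; i < 60; i++ {
            out, err := exec.Command("kubectl", "--context", "functional-992282",
                "get", "pvc", "myclaim", "-o", "json").Output()
            if err == nil {
                var pvc struct {
                    Status struct{ Phase string } `json:"status"`
                }
                if json.Unmarshal(out, &pvc) == nil && pvc.Status.Phase == "Bound" {
                    fmt.Println("pvc is bound")
                    return
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pvc to bind")
    }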

TestFunctional/parallel/SSHCmd (0.33s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.33s)

TestFunctional/parallel/CpCmd (1.23s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh -n functional-992282 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 cp functional-992282:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd738746913/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh -n functional-992282 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh -n functional-992282 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.23s)

TestFunctional/parallel/MySQL (32.53s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-992282 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-8wcgp" [4367a458-a43f-47a3-a36f-3af1ffc0fc3e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-8wcgp" [4367a458-a43f-47a3-a36f-3af1ffc0fc3e] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.013541012s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-992282 exec mysql-6bcdcbc558-8wcgp -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-992282 exec mysql-6bcdcbc558-8wcgp -- mysql -ppassword -e "show databases;": exit status 1 (379.548494ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1213 09:23:05.560692  391877 retry.go:31] will retry after 860.929655ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-992282 exec mysql-6bcdcbc558-8wcgp -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-992282 exec mysql-6bcdcbc558-8wcgp -- mysql -ppassword -e "show databases;": exit status 1 (211.400409ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1213 09:23:06.633931  391877 retry.go:31] will retry after 976.621736ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-992282 exec mysql-6bcdcbc558-8wcgp -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-992282 exec mysql-6bcdcbc558-8wcgp -- mysql -ppassword -e "show databases;": exit status 1 (394.32967ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1213 09:23:08.006132  391877 retry.go:31] will retry after 3.283691491s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-992282 exec mysql-6bcdcbc558-8wcgp -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (32.53s)
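Note: the repeated ERROR 1045 results above are expected while the mysql container is still initializing the root account; the test simply retries until "show databases;" succeeds. A minimal retry loop with the same kubectl command (pod name copied from this run):

    // Sketch: retry the in-pod mysql query until the server accepts the root password.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        args := []string{"--context", "functional-992282", "exec", "mysql-6bcdcbc558-8wcgp", "--",
            "mysql", "-ppassword", "-e", "show databases;"}
        for attempt := 1; attempt <= 10; attempt++ {
            out, err := exec.Command("kubectl", args...).CombinedOutput()
            if err == nil {
                fmt.Printf("%s", out)
                return
            }
            fmt.Printf("attempt %d failed (%v), retrying\n", attempt, err)
            time.Sleep(2 * time.Second)
        }
    }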

TestFunctional/parallel/FileSync (0.18s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/391877/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh "sudo cat /etc/test/nested/copy/391877/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.18s)

TestFunctional/parallel/CertSync (1.09s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/391877.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh "sudo cat /etc/ssl/certs/391877.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/391877.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh "sudo cat /usr/share/ca-certificates/391877.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3918772.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh "sudo cat /etc/ssl/certs/3918772.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3918772.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh "sudo cat /usr/share/ca-certificates/3918772.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.09s)
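Note: the checks above verify each synced certificate in three locations inside the VM (the two .pem copies plus the hashed .0 name) via "minikube ssh sudo cat ...". A compact sketch of that loop, with the paths copied from this run:

    // Sketch: confirm the synced cert files exist inside the VM.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        paths := []string{
            "/etc/ssl/certs/391877.pem",
            "/usr/share/ca-certificates/391877.pem",
            "/etc/ssl/certs/51391683.0",
        }
        for _, p := range paths {
            err := exec.Command("out/minikube-linux-amd64", "-p", "functional-992282",
                "ssh", "sudo cat "+p).Run()
            fmt.Printf("%s present: %v\n", p, err == nil)
        }
    }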

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-992282 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992282 ssh "sudo systemctl is-active docker": exit status 1 (219.740927ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992282 ssh "sudo systemctl is-active containerd": exit status 1 (211.321624ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)
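Note: "systemctl is-active" exits 0 only when the unit is active, so the non-zero ssh exit (status 3) together with "inactive" on stdout is the expected result here: with cri-o as the container runtime, docker and containerd should both be stopped. An illustrative check:

    // Sketch: expect docker and containerd to be inactive when cri-o is the runtime.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        for _, unit := range []string{"docker", "containerd"} {
            out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-992282",
                "ssh", "sudo systemctl is-active "+unit).CombinedOutput()
            inactive := err != nil && strings.Contains(string(out), "inactive")
            fmt.Printf("%s inactive: %v\n", unit, inactive)
        }
    }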

TestFunctional/parallel/License (0.42s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.42s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-992282 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-992282 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-d7lpp" [96d8e547-578f-4af0-9f2f-c04d338b0cc9] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-d7lpp" [96d8e547-578f-4af0-9f2f-c04d338b0cc9] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003275407s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.25s)
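Note: the "waiting 10m0s for pods matching ..." lines throughout this report are a label-selector poll: the helper watches pods with the given label until they are Running and Ready. The sketch below is a simplified equivalent that only checks the pod phase; the real helper also tracks readiness conditions.

    // Sketch: poll pods matching app=hello-node until one reports phase Running.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        deadline := time.Now().Add(10 * time.Minute)
        for time.Now().Before(deadline) {
            out, _ := exec.Command("kubectl", "--context", "functional-992282",
                "get", "pods", "-l", "app=hello-node",
                "-o", "jsonpath={.items[*].status.phase}").Output()
            if strings.Contains(string(out), "Running") {
                fmt.Println("pod is running")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for app=hello-node")
    }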

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.51s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.51s)

TestFunctional/parallel/ImageCommands/ImageListShort (2.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-992282 image ls --format short --alsologtostderr: (2.229336492s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-992282 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-992282
localhost/kicbase/echo-server:functional-992282
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-992282 image ls --format short --alsologtostderr:
I1213 09:23:12.369135  398896 out.go:360] Setting OutFile to fd 1 ...
I1213 09:23:12.369387  398896 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:23:12.369400  398896 out.go:374] Setting ErrFile to fd 2...
I1213 09:23:12.369407  398896 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:23:12.369603  398896 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
I1213 09:23:12.370168  398896 config.go:182] Loaded profile config "functional-992282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 09:23:12.370258  398896 config.go:182] Loaded profile config "functional-992282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 09:23:12.372720  398896 ssh_runner.go:195] Run: systemctl --version
I1213 09:23:12.375475  398896 main.go:143] libmachine: domain functional-992282 has defined MAC address 52:54:00:a6:34:9f in network mk-functional-992282
I1213 09:23:12.375957  398896 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:34:9f", ip: ""} in network mk-functional-992282: {Iface:virbr1 ExpiryTime:2025-12-13 10:19:19 +0000 UTC Type:0 Mac:52:54:00:a6:34:9f Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:functional-992282 Clientid:01:52:54:00:a6:34:9f}
I1213 09:23:12.375997  398896 main.go:143] libmachine: domain functional-992282 has defined IP address 192.168.39.245 and MAC address 52:54:00:a6:34:9f in network mk-functional-992282
I1213 09:23:12.376182  398896 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/functional-992282/id_rsa Username:docker}
I1213 09:23:12.484172  398896 ssh_runner.go:195] Run: sudo crictl images --output json
I1213 09:23:14.532031  398896 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.047819129s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (2.23s)
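Note: the stderr above shows that "image ls" is backed by "sudo crictl images --output json" inside the VM. Decoding that JSON into the tag list printed by the short format could look like the sketch below; the field layout is assumed from typical crictl output rather than taken from this report.

    // Sketch: list image tags from crictl's JSON output (run inside the VM).
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var listing struct {
            Images []struct {
                RepoTags []string `json:"repoTags"`
            } `json:"images"`
        }
        if err := json.Unmarshal(out, &listing); err != nil {
            panic(err)
        }
        for _, img := range listing.Images {
            for _, tag := range img.RepoTags {
                fmt.Println(tag)
            }
        }
    }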

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-992282 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-992282  │ 9056ab77afb8e │ 4.95MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ localhost/minikube-local-cache-test     │ functional-992282  │ f45b6aa64e2b9 │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-992282 image ls --format table --alsologtostderr:
I1213 09:23:16.728491  399008 out.go:360] Setting OutFile to fd 1 ...
I1213 09:23:16.728646  399008 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:23:16.728657  399008 out.go:374] Setting ErrFile to fd 2...
I1213 09:23:16.728664  399008 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:23:16.728923  399008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
I1213 09:23:16.729520  399008 config.go:182] Loaded profile config "functional-992282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 09:23:16.729635  399008 config.go:182] Loaded profile config "functional-992282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 09:23:16.732173  399008 ssh_runner.go:195] Run: systemctl --version
I1213 09:23:16.735211  399008 main.go:143] libmachine: domain functional-992282 has defined MAC address 52:54:00:a6:34:9f in network mk-functional-992282
I1213 09:23:16.735695  399008 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:34:9f", ip: ""} in network mk-functional-992282: {Iface:virbr1 ExpiryTime:2025-12-13 10:19:19 +0000 UTC Type:0 Mac:52:54:00:a6:34:9f Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:functional-992282 Clientid:01:52:54:00:a6:34:9f}
I1213 09:23:16.735765  399008 main.go:143] libmachine: domain functional-992282 has defined IP address 192.168.39.245 and MAC address 52:54:00:a6:34:9f in network mk-functional-992282
I1213 09:23:16.735955  399008 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/functional-992282/id_rsa Username:docker}
I1213 09:23:16.846880  399008 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-992282 image ls --format json --alsologtostderr:
[{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","re
poDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628d
b3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5d
cbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae5
58433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-992282"],"size":"4945146"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"
da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"f45b6aa64e2b92ec6360bf9168662b16d2d1ae947086d61eac6debf951b25df6","repoDigests":["localhost/minikube-local-cache-test@sha256:3aaa23172f09451264f5c52850bb9cbe522c1606ed80c88546ad9015ed3c6772"],"repoTags":["localhost/minikube-local-cache-test:functional-992282"],"size":"3330"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece647
3b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-992282 image ls --format json --alsologtostderr:
I1213 09:23:16.496501  398998 out.go:360] Setting OutFile to fd 1 ...
I1213 09:23:16.496809  398998 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:23:16.496820  398998 out.go:374] Setting ErrFile to fd 2...
I1213 09:23:16.496824  398998 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:23:16.497037  398998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
I1213 09:23:16.497641  398998 config.go:182] Loaded profile config "functional-992282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 09:23:16.497736  398998 config.go:182] Loaded profile config "functional-992282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 09:23:16.499989  398998 ssh_runner.go:195] Run: systemctl --version
I1213 09:23:16.502629  398998 main.go:143] libmachine: domain functional-992282 has defined MAC address 52:54:00:a6:34:9f in network mk-functional-992282
I1213 09:23:16.503116  398998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:34:9f", ip: ""} in network mk-functional-992282: {Iface:virbr1 ExpiryTime:2025-12-13 10:19:19 +0000 UTC Type:0 Mac:52:54:00:a6:34:9f Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:functional-992282 Clientid:01:52:54:00:a6:34:9f}
I1213 09:23:16.503162  398998 main.go:143] libmachine: domain functional-992282 has defined IP address 192.168.39.245 and MAC address 52:54:00:a6:34:9f in network mk-functional-992282
I1213 09:23:16.503358  398998 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/functional-992282/id_rsa Username:docker}
I1213 09:23:16.601026  398998 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-992282 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-992282
size: "4945146"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: f45b6aa64e2b92ec6360bf9168662b16d2d1ae947086d61eac6debf951b25df6
repoDigests:
- localhost/minikube-local-cache-test@sha256:3aaa23172f09451264f5c52850bb9cbe522c1606ed80c88546ad9015ed3c6772
repoTags:
- localhost/minikube-local-cache-test:functional-992282
size: "3330"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-992282 image ls --format yaml --alsologtostderr:
I1213 09:23:14.606225  398933 out.go:360] Setting OutFile to fd 1 ...
I1213 09:23:14.606558  398933 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:23:14.606566  398933 out.go:374] Setting ErrFile to fd 2...
I1213 09:23:14.606572  398933 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:23:14.606873  398933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
I1213 09:23:14.607708  398933 config.go:182] Loaded profile config "functional-992282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 09:23:14.607864  398933 config.go:182] Loaded profile config "functional-992282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 09:23:14.610618  398933 ssh_runner.go:195] Run: systemctl --version
I1213 09:23:14.613600  398933 main.go:143] libmachine: domain functional-992282 has defined MAC address 52:54:00:a6:34:9f in network mk-functional-992282
I1213 09:23:14.614209  398933 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:34:9f", ip: ""} in network mk-functional-992282: {Iface:virbr1 ExpiryTime:2025-12-13 10:19:19 +0000 UTC Type:0 Mac:52:54:00:a6:34:9f Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:functional-992282 Clientid:01:52:54:00:a6:34:9f}
I1213 09:23:14.614252  398933 main.go:143] libmachine: domain functional-992282 has defined IP address 192.168.39.245 and MAC address 52:54:00:a6:34:9f in network mk-functional-992282
I1213 09:23:14.614482  398933 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/functional-992282/id_rsa Username:docker}
I1213 09:23:14.740076  398933 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992282 ssh pgrep buildkitd: exit status 1 (183.921986ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 image build -t localhost/my-image:functional-992282 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-992282 image build -t localhost/my-image:functional-992282 testdata/build --alsologtostderr: (3.792989425s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-992282 image build -t localhost/my-image:functional-992282 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> c642d930751
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-992282
--> 7a89ce133c0
Successfully tagged localhost/my-image:functional-992282
7a89ce133c0c60206044530220fb05d9dd702341f4431e5814cf5c55c7a7d5cc
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-992282 image build -t localhost/my-image:functional-992282 testdata/build --alsologtostderr:
I1213 09:23:15.085726  398975 out.go:360] Setting OutFile to fd 1 ...
I1213 09:23:15.086020  398975 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:23:15.086031  398975 out.go:374] Setting ErrFile to fd 2...
I1213 09:23:15.086038  398975 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:23:15.086279  398975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
I1213 09:23:15.086869  398975 config.go:182] Loaded profile config "functional-992282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 09:23:15.087755  398975 config.go:182] Loaded profile config "functional-992282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 09:23:15.090388  398975 ssh_runner.go:195] Run: systemctl --version
I1213 09:23:15.093319  398975 main.go:143] libmachine: domain functional-992282 has defined MAC address 52:54:00:a6:34:9f in network mk-functional-992282
I1213 09:23:15.093847  398975 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a6:34:9f", ip: ""} in network mk-functional-992282: {Iface:virbr1 ExpiryTime:2025-12-13 10:19:19 +0000 UTC Type:0 Mac:52:54:00:a6:34:9f Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:functional-992282 Clientid:01:52:54:00:a6:34:9f}
I1213 09:23:15.093884  398975 main.go:143] libmachine: domain functional-992282 has defined IP address 192.168.39.245 and MAC address 52:54:00:a6:34:9f in network mk-functional-992282
I1213 09:23:15.094090  398975 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/functional-992282/id_rsa Username:docker}
I1213 09:23:15.204413  398975 build_images.go:162] Building image from path: /tmp/build.3854429181.tar
I1213 09:23:15.204485  398975 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 09:23:15.219586  398975 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3854429181.tar
I1213 09:23:15.224632  398975 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3854429181.tar: stat -c "%s %y" /var/lib/minikube/build/build.3854429181.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3854429181.tar': No such file or directory
I1213 09:23:15.224678  398975 ssh_runner.go:362] scp /tmp/build.3854429181.tar --> /var/lib/minikube/build/build.3854429181.tar (3072 bytes)
I1213 09:23:15.267471  398975 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3854429181
I1213 09:23:15.284300  398975 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3854429181 -xf /var/lib/minikube/build/build.3854429181.tar
I1213 09:23:15.297831  398975 crio.go:315] Building image: /var/lib/minikube/build/build.3854429181
I1213 09:23:15.297907  398975 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-992282 /var/lib/minikube/build/build.3854429181 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1213 09:23:18.772242  398975 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-992282 /var/lib/minikube/build/build.3854429181 --cgroup-manager=cgroupfs: (3.474308538s)
I1213 09:23:18.772337  398975 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3854429181
I1213 09:23:18.792853  398975 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3854429181.tar
I1213 09:23:18.806856  398975 build_images.go:218] Built localhost/my-image:functional-992282 from /tmp/build.3854429181.tar
I1213 09:23:18.806897  398975 build_images.go:134] succeeded building to: functional-992282
I1213 09:23:18.806902  398975 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.18s)
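
The image build exercised above drives podman on the node over SSH: the build context tar is copied to /var/lib/minikube/build, unpacked, and built with --cgroup-manager=cgroupfs. The same flow can be reproduced by shelling out to the minikube binary, as the test does. A minimal Go sketch, assuming the out/minikube-linux-amd64 binary and the functional-992282 profile from this run are still available (illustrative only, not the project's own test helper):

// buildcheck.go - minimal sketch, assuming out/minikube-linux-amd64 and the
// functional-992282 profile from this run; not the project's test helper.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// run invokes the minikube binary with the given arguments and returns its combined output.
func run(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	// Build testdata/build on the node (crio delegates to podman, as logged above),
	// then confirm the resulting tag shows up in "image ls".
	run("-p", "functional-992282", "image", "build",
		"-t", "localhost/my-image:functional-992282", "testdata/build", "--alsologtostderr")
	images := run("-p", "functional-992282", "image", "ls")
	if !strings.Contains(images, "localhost/my-image") {
		log.Fatal("built image not found in image ls output")
	}
	fmt.Println("localhost/my-image:functional-992282 present")
}

The --alsologtostderr flag is what produces the ssh_runner trace captured in the stderr block above.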

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.497637406s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-992282
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.52s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 image load --daemon kicbase/echo-server:functional-992282 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-992282 image load --daemon kicbase/echo-server:functional-992282 --alsologtostderr: (1.358755691s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.56s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "261.180777ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "71.826376ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 image load --daemon kicbase/echo-server:functional-992282 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "257.733753ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "72.685391ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (24.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-992282 /tmp/TestFunctionalparallelMountCmdany-port3072745938/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765617761450760066" to /tmp/TestFunctionalparallelMountCmdany-port3072745938/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765617761450760066" to /tmp/TestFunctionalparallelMountCmdany-port3072745938/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765617761450760066" to /tmp/TestFunctionalparallelMountCmdany-port3072745938/001/test-1765617761450760066
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992282 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (178.966226ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 09:22:41.630167  391877 retry.go:31] will retry after 532.551567ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 09:22 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 09:22 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 09:22 test-1765617761450760066
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh cat /mount-9p/test-1765617761450760066
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-992282 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [cd9c6ab5-6041-4468-b7dd-d418b36b9bd9] Pending
helpers_test.go:353: "busybox-mount" [cd9c6ab5-6041-4468-b7dd-d418b36b9bd9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [cd9c6ab5-6041-4468-b7dd-d418b36b9bd9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [cd9c6ab5-6041-4468-b7dd-d418b36b9bd9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 22.024439695s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-992282 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-992282 /tmp/TestFunctionalparallelMountCmdany-port3072745938/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (24.29s)
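
The first findmnt probe above fails and is retried because the mount daemon the test starts is asynchronous; the 9p mount only becomes visible a moment later. A minimal Go sketch of that probe loop, assuming a `minikube mount ... /mount-9p` process is already running for the functional-992282 profile (the attempt count and sleep interval here are arbitrary choices, not the test's own backoff):

// mountprobe.go - minimal sketch of polling for the 9p mount to appear,
// assuming out/minikube-linux-amd64 and the functional-992282 profile.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	// "minikube mount" runs as a separate daemon process (started elsewhere),
	// so poll the guest until findmnt reports a 9p filesystem on /mount-9p.
	for attempt := 1; attempt <= 10; attempt++ {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-992282",
			"ssh", "findmnt -T /mount-9p | grep 9p")
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("mount visible after %d attempt(s):\n%s", attempt, out)
			return
		}
		time.Sleep(500 * time.Millisecond) // arbitrary retry interval
	}
	log.Fatal("/mount-9p never showed up as a 9p mount")
}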

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-992282
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 image load --daemon kicbase/echo-server:functional-992282 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.71s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 image save kicbase/echo-server:functional-992282 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.84s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (2.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 image rm kicbase/echo-server:functional-992282 --alsologtostderr
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-992282 image rm kicbase/echo-server:functional-992282 --alsologtostderr: (1.924571355s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (2.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (11.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-992282 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (11.040063179s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (11.32s)
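
Together with ImageSaveToFile above, this verifies the tarball round trip: an image saved from the node can be loaded back into its container storage. A minimal Go sketch of that round trip, assuming the same binary and profile as this run; the local tar path below is arbitrary:

// imageroundtrip.go - minimal sketch of the save-to-file / load-from-file
// round trip exercised above; not the project's test helper.
package main

import (
	"log"
	"os/exec"
)

// mk runs the minikube binary and aborts on any error.
func mk(args ...string) {
	if out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput(); err != nil {
		log.Fatalf("%v: %v\n%s", args, err, out)
	}
}

func main() {
	tar := "/tmp/echo-server-save.tar" // arbitrary local path, not the path used in the log
	// Export the tagged image from the node to a local tarball ...
	mk("-p", "functional-992282", "image", "save",
		"kicbase/echo-server:functional-992282", tar, "--alsologtostderr")
	// ... load it back into the node's container storage ...
	mk("-p", "functional-992282", "image", "load", tar, "--alsologtostderr")
	// ... and list images as a final smoke check (output discarded here).
	mk("-p", "functional-992282", "image", "ls")
}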

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 service list -o json
functional_test.go:1504: Took "408.4218ms" to run "out/minikube-linux-amd64 -p functional-992282 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.245:32748
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.245:32748
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-992282
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 image save --daemon kicbase/echo-server:functional-992282 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-992282
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-992282 /tmp/TestFunctionalparallelMountCmdspecific-port3128360384/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992282 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (260.888057ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 09:23:06.004566  391877 retry.go:31] will retry after 655.231765ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-992282 /tmp/TestFunctionalparallelMountCmdspecific-port3128360384/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992282 ssh "sudo umount -f /mount-9p": exit status 1 (206.37999ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-992282 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-992282 /tmp/TestFunctionalparallelMountCmdspecific-port3128360384/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.70s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-992282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1305436074/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-992282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1305436074/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-992282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1305436074/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992282 ssh "findmnt -T" /mount1: exit status 1 (246.743885ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 09:23:07.693823  391877 retry.go:31] will retry after 745.478318ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-992282 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-992282 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-992282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1305436074/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-992282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1305436074/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-992282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1305436074/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.54s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-992282
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-992282
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-992282
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22127-387918/.minikube/files/etc/test/nested/copy/391877/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (83.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-553391 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1213 09:23:56.552176  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:24:24.263990  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-553391 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m23.374652882s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (83.38s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (53.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1213 09:24:49.895185  391877 config.go:182] Loaded profile config "functional-553391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-553391 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-553391 --alsologtostderr -v=8: (53.15681442s)
functional_test.go:678: soft start took 53.157247798s for "functional-553391" cluster.
I1213 09:25:43.052409  391877 config.go:182] Loaded profile config "functional-553391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (53.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-553391 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-553391 cache add registry.k8s.io/pause:3.1: (1.0564708s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-553391 cache add registry.k8s.io/pause:3.3: (1.03076695s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-553391 cache add registry.k8s.io/pause:latest: (1.044708744s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.92s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-553391 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach3823507678/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 cache add minikube-local-cache-test:functional-553391
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-553391 cache add minikube-local-cache-test:functional-553391: (1.60045061s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 cache delete minikube-local-cache-test:functional-553391
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-553391
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.92s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553391 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (179.070693ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.55s)
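
The sequence above removes the cached image from the node, confirms crictl no longer finds it, runs `cache reload`, and confirms the image is back. A minimal Go sketch of the same check, assuming the out/minikube-linux-amd64 binary and the functional-553391 profile from this run:

// cachereload.go - minimal sketch of the cache reload check performed above.
package main

import (
	"log"
	"os/exec"
)

// mk runs the minikube binary and returns any execution error.
func mk(args ...string) error {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		log.Printf("%v: %v\n%s", args, err, out)
	}
	return err
}

func main() {
	// Remove the cached image from the node's runtime ...
	if err := mk("-p", "functional-553391", "ssh", "sudo crictl rmi registry.k8s.io/pause:latest"); err != nil {
		log.Fatal("rmi failed")
	}
	// ... verify it is gone (inspecti is expected to fail here) ...
	if err := mk("-p", "functional-553391", "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		log.Fatal("image still present after rmi")
	}
	// ... reload everything under the local cache back onto the node ...
	if err := mk("-p", "functional-553391", "cache", "reload"); err != nil {
		log.Fatal("cache reload failed")
	}
	// ... and confirm the image is back.
	if err := mk("-p", "functional-553391", "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		log.Fatal("image missing after cache reload")
	}
	log.Println("cache reload restored registry.k8s.io/pause:latest")
}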

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 kubectl -- --context functional-553391 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-553391 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-553391 logs: (1.233994374s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1269796797/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-553391 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1269796797/001/logs.txt: (1.251944021s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (3.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-553391 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-553391
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-553391: exit status 115 (234.243127ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.38:31618 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-553391 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (3.51s)
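
Exit status 115 here corresponds to the SVC_UNREACHABLE error shown in stderr: the service exists but has no running pod behind it. A minimal Go sketch that asserts on that exit code, assuming testdata/invalidsvc.yaml has been applied to the functional-553391 profile as in the test above (the 115 value is taken from this output, not from any documented contract):

// svccheck.go - minimal sketch; assumes invalid-svc from testdata/invalidsvc.yaml
// is applied to the functional-553391 profile, as in the test above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "service", "invalid-svc", "-p", "functional-553391")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	// Exit code 115 is what this run reported for an unreachable service.
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 115 {
		fmt.Printf("service correctly reported unreachable (exit 115):\n%s", out)
		return
	}
	fmt.Printf("unexpected result: err=%v\n%s", err, out)
}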

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553391 config get cpus: exit status 14 (75.981808ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553391 config get cpus: exit status 14 (70.072875ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.47s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-553391 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-553391 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (120.701301ms)

                                                
                                                
-- stdout --
	* [functional-553391] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 09:35:53.922962  403262 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:35:53.923262  403262 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:35:53.923273  403262 out.go:374] Setting ErrFile to fd 2...
	I1213 09:35:53.923278  403262 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:35:53.923522  403262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
	I1213 09:35:53.923977  403262 out.go:368] Setting JSON to false
	I1213 09:35:53.925000  403262 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4703,"bootTime":1765613851,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:35:53.925061  403262 start.go:143] virtualization: kvm guest
	I1213 09:35:53.927062  403262 out.go:179] * [functional-553391] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:35:53.928806  403262 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 09:35:53.928826  403262 notify.go:221] Checking for updates...
	I1213 09:35:53.931632  403262 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:35:53.933137  403262 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
	I1213 09:35:53.934500  403262 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
	I1213 09:35:53.935884  403262 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:35:53.937231  403262 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:35:53.938902  403262 config.go:182] Loaded profile config "functional-553391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:35:53.939488  403262 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:35:53.971657  403262 out.go:179] * Using the kvm2 driver based on existing profile
	I1213 09:35:53.972904  403262 start.go:309] selected driver: kvm2
	I1213 09:35:53.972925  403262 start.go:927] validating driver "kvm2" against &{Name:functional-553391 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-553391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:35:53.973047  403262 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:35:53.975258  403262 out.go:203] 
	W1213 09:35:53.977078  403262 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 09:35:53.978502  403262 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-553391 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.24s)
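
The DryRun results above show the validation path: even with --dry-run, minikube checks the requested memory against its usable minimum and exits with code 23 (RSRC_INSUFFICIENT_REQ_MEMORY) for 250MB, while a dry run without the memory override exits 0 and never touches the VM. A hedged Go sketch of the same assertion, reusing the binary path, profile, and flags from the log:

package main

import (
	"fmt"
	"os/exec"
)

// dryRunExitCode runs `minikube start --dry-run` with extra flags and returns
// the exit code; --dry-run means no VM is created or modified.
func dryRunExitCode(extra ...string) int {
	args := append([]string{
		"start", "-p", "functional-553391", "--dry-run",
		"--driver=kvm2", "--container-runtime=crio",
	}, extra...)
	if err := exec.Command("out/minikube-linux-amd64", args...).Run(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			return ee.ExitCode()
		}
		return -1
	}
	return 0
}

func main() {
	// 250MB is below the usable minimum reported above, so the dry run should
	// fail fast with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY).
	fmt.Println(dryRunExitCode("--memory", "250MB")) // expect 23
	fmt.Println(dryRunExitCode())                    // expect 0
}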

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-553391 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-553391 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (118.152737ms)

                                                
                                                
-- stdout --
	* [functional-553391] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 09:35:54.159923  403295 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:35:54.160027  403295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:35:54.160032  403295 out.go:374] Setting ErrFile to fd 2...
	I1213 09:35:54.160036  403295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:35:54.160364  403295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
	I1213 09:35:54.160842  403295 out.go:368] Setting JSON to false
	I1213 09:35:54.161750  403295 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4703,"bootTime":1765613851,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:35:54.161818  403295 start.go:143] virtualization: kvm guest
	I1213 09:35:54.163745  403295 out.go:179] * [functional-553391] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1213 09:35:54.165254  403295 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 09:35:54.165269  403295 notify.go:221] Checking for updates...
	I1213 09:35:54.167675  403295 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:35:54.168945  403295 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
	I1213 09:35:54.170341  403295 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
	I1213 09:35:54.171825  403295 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:35:54.173115  403295 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:35:54.174891  403295 config.go:182] Loaded profile config "functional-553391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:35:54.175647  403295 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:35:54.206662  403295 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1213 09:35:54.207891  403295 start.go:309] selected driver: kvm2
	I1213 09:35:54.207911  403295 start.go:927] validating driver "kvm2" against &{Name:functional-553391 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-553391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:35:54.208021  403295 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:35:54.210175  403295 out.go:203] 
	W1213 09:35:54.211537  403295 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 09:35:54.212968  403295 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.74s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.74s)
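
StatusCmd exercises three output modes; the Go template in the second run names the fields (Host, Kubelet, APIServer, Kubeconfig) that also appear in the -o json payload. A small sketch that decodes that payload, assuming the single-node object form implied by the template fields:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// clusterStatus mirrors the fields the test's template selects; any other
// fields in the JSON payload are ignored during unmarshalling.
type clusterStatus struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	// `status` exits non-zero when a component is down, so err doubles as the
	// health signal here.
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-553391", "status", "-o", "json").Output()
	if err != nil {
		fmt.Println("cluster not fully running:", err)
		return
	}
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("unexpected status payload:", err)
		return
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}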

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh -n functional-553391 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 cp functional-553391:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp215355195/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh -n functional-553391 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh -n functional-553391 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.11s)
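
Each CpCmd step pairs a `minikube cp` with an `ssh sudo cat` read-back to verify the file landed in the guest. A sketch of that pairing, illustrative only, assuming the same binary, profile, and testdata file as the run above:

package main

import (
	"fmt"
	"os/exec"
)

// cpAndVerify copies a local file into the guest and reads it back over ssh,
// mirroring the cp/ssh pairing used by the CpCmd steps above.
func cpAndVerify(src, dest string) error {
	if err := exec.Command("out/minikube-linux-amd64", "-p", "functional-553391",
		"cp", src, dest).Run(); err != nil {
		return fmt.Errorf("cp %s: %w", src, err)
	}
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-553391",
		"ssh", "sudo cat "+dest).Output()
	if err != nil {
		return fmt.Errorf("read back %s: %w", dest, err)
	}
	fmt.Printf("%s -> %s:\n%s", src, dest, out)
	return nil
}

func main() {
	if err := cpAndVerify("testdata/cp-test.txt", "/home/docker/cp-test.txt"); err != nil {
		fmt.Println(err)
	}
}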

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/391877/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh "sudo cat /etc/test/nested/copy/391877/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/391877.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh "sudo cat /etc/ssl/certs/391877.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/391877.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh "sudo cat /usr/share/ca-certificates/391877.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3918772.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh "sudo cat /etc/ssl/certs/3918772.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3918772.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh "sudo cat /usr/share/ca-certificates/3918772.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.07s)
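
CertSync probes two host certificates (named after the test process ID, 391877.pem and 3918772.pem) at the paths minikube syncs them to inside the VM, plus the hash-named entries in /etc/ssl/certs that accompany them. A sketch that walks the same paths over `minikube ssh`; the path list is copied from the log and the presence check is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The PEM names come from the test process ID (391877); the .0 entries are
	// the hash-named files the same test expects alongside them. All paths are
	// taken verbatim from the log above.
	paths := []string{
		"/etc/ssl/certs/391877.pem",
		"/usr/share/ca-certificates/391877.pem",
		"/etc/ssl/certs/51391683.0",
		"/etc/ssl/certs/3918772.pem",
		"/usr/share/ca-certificates/3918772.pem",
		"/etc/ssl/certs/3ec20f2e.0",
	}
	for _, p := range paths {
		cmd := exec.Command("out/minikube-linux-amd64",
			"-p", "functional-553391", "ssh", "sudo cat "+p)
		if err := cmd.Run(); err != nil {
			fmt.Printf("missing or unreadable: %s (%v)\n", p, err)
			continue
		}
		fmt.Printf("present: %s\n", p)
	}
}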

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-553391 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)
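
The NodeLabels check iterates the labels of the first node via a go-template. An equivalent sketch that reads `kubectl get nodes -o json` instead and prints the same label keys, assuming kubectl and the functional-553391 context from this report:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// nodeList captures just enough of `kubectl get nodes -o json` to reach the
// labels map that the go-template in the test iterates over.
type nodeList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-553391",
		"get", "nodes", "-o", "json").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	var nodes nodeList
	if err := json.Unmarshal(out, &nodes); err != nil || len(nodes.Items) == 0 {
		fmt.Println("no nodes in payload")
		return
	}
	for key := range nodes.Items[0].Metadata.Labels {
		fmt.Println(key)
	}
}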

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553391 ssh "sudo systemctl is-active docker": exit status 1 (174.085541ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553391 ssh "sudo systemctl is-active containerd": exit status 1 (176.615028ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.35s)
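
With crio as the active runtime, `systemctl is-active docker` and `systemctl is-active containerd` print "inactive" and exit 3 inside the VM, which `minikube ssh` surfaces as a non-zero exit; the test treats that failure as the expected outcome. A sketch of the same probe, again assuming the binary and profile from this report:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeState asks systemd inside the VM for a unit's state. `systemctl
// is-active` prints the state on stdout and exits non-zero for anything other
// than "active", which minikube ssh passes through as a failed command.
func runtimeState(unit string) (string, bool) {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-553391", "ssh", "sudo systemctl is-active "+unit).Output()
	state := strings.TrimSpace(string(out))
	return state, err == nil && state == "active"
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		state, active := runtimeState(unit)
		fmt.Printf("%s: %s (active=%v)\n", unit, state, active)
	}
}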

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.40s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.42s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-553391 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-553391
localhost/kicbase/echo-server:functional-553391
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-553391 image ls --format short --alsologtostderr:
I1213 09:40:57.083977  404578 out.go:360] Setting OutFile to fd 1 ...
I1213 09:40:57.084082  404578 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:40:57.084088  404578 out.go:374] Setting ErrFile to fd 2...
I1213 09:40:57.084092  404578 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:40:57.084366  404578 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
I1213 09:40:57.085024  404578 config.go:182] Loaded profile config "functional-553391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 09:40:57.085116  404578 config.go:182] Loaded profile config "functional-553391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 09:40:57.087206  404578 ssh_runner.go:195] Run: systemctl --version
I1213 09:40:57.089699  404578 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
I1213 09:40:57.090084  404578 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
I1213 09:40:57.090109  404578 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
I1213 09:40:57.090290  404578 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/functional-553391/id_rsa Username:docker}
I1213 09:40:57.167658  404578 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-553391 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ localhost/kicbase/echo-server           │ functional-553391  │ 9056ab77afb8e │ 4.94MB │
│ localhost/minikube-local-cache-test     │ functional-553391  │ f45b6aa64e2b9 │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ localhost/my-image                      │ functional-553391  │ 2187223008721 │ 1.47MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-553391 image ls --format table --alsologtostderr:
I1213 09:41:01.136014  404644 out.go:360] Setting OutFile to fd 1 ...
I1213 09:41:01.136308  404644 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:41:01.136332  404644 out.go:374] Setting ErrFile to fd 2...
I1213 09:41:01.136339  404644 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:41:01.136622  404644 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
I1213 09:41:01.137228  404644 config.go:182] Loaded profile config "functional-553391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 09:41:01.137318  404644 config.go:182] Loaded profile config "functional-553391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 09:41:01.139596  404644 ssh_runner.go:195] Run: systemctl --version
I1213 09:41:01.142151  404644 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
I1213 09:41:01.142728  404644 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
I1213 09:41:01.142763  404644 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
I1213 09:41:01.143010  404644 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/functional-553391/id_rsa Username:docker}
I1213 09:41:01.221732  404644 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-553391 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"2187223008721a896753586883a02a4c1a8e6a96734390d0d18729ede8c77263","repoDigests":["localhost/my-image@sha256:502f4e56991c690dd3651dc46f72ca189d3b79cd22a07987718664a71b0f4c39"],"repoTags":["localhost/my-image:functional-553391"],"size":"1468599"},{"id":"45f3cc72d235f1cfda3de70f
e9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"76872535"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-553391"],"size":"4943877"},{"id":"f45b6aa64e2b92ec6360bf9168
662b16d2d1ae947086d61eac6debf951b25df6","repoDigests":["localhost/minikube-local-cache-test@sha256:3aaa23172f09451264f5c52850bb9cbe522c1606ed80c88546ad9015ed3c6772"],"repoTags":["localhost/minikube-local-cache-test:functional-553391"],"size":"3330"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a","registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71977881"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52747095"},{"id":"409467f978b4a30fe7170
12736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"d176089329d4261a6f6f034b00889fde59d7bb5ebef97b60a89da5bf930e382d","repoDigests":["docker.io/library/281aca6e0c661d67f0af558288775fb18a6d385660e176647af1f25860d01e9e-tmp@sha256:0813c069906d3271a5ef91c3ccac727a1df8df7bfda98cbc5c7a873f99b107d3"],"repoTags":[],"size":"1466018"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872
c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["
registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90819569"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-553391 image ls --format json --alsologtostderr:
I1213 09:41:00.941755  404633 out.go:360] Setting OutFile to fd 1 ...
I1213 09:41:00.941860  404633 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:41:00.941872  404633 out.go:374] Setting ErrFile to fd 2...
I1213 09:41:00.941878  404633 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:41:00.942110  404633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
I1213 09:41:00.942940  404633 config.go:182] Loaded profile config "functional-553391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 09:41:00.943050  404633 config.go:182] Loaded profile config "functional-553391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 09:41:00.945423  404633 ssh_runner.go:195] Run: systemctl --version
I1213 09:41:00.947923  404633 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
I1213 09:41:00.948453  404633 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
I1213 09:41:00.948481  404633 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
I1213 09:41:00.948612  404633 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/functional-553391/id_rsa Username:docker}
I1213 09:41:01.029899  404633 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.19s)
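
The JSON listing above is an array of objects with id, repoDigests, repoTags, and size (a decimal byte count serialized as a string). A sketch that decodes it into a typed slice, assuming the same binary and profile as the run above:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageEntry matches the fields visible in the JSON listing above; size is a
// decimal byte count serialized as a string.
type imageEntry struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-553391", "image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	var images []imageEntry
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("unexpected payload:", err)
		return
	}
	for _, img := range images {
		tag := "<untagged>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-60s %s bytes\n", tag, img.Size)
	}
}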

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-553391 image ls --format yaml --alsologtostderr:
- id: f45b6aa64e2b92ec6360bf9168662b16d2d1ae947086d61eac6debf951b25df6
repoDigests:
- localhost/minikube-local-cache-test@sha256:3aaa23172f09451264f5c52850bb9cbe522c1606ed80c88546ad9015ed3c6772
repoTags:
- localhost/minikube-local-cache-test:functional-553391
size: "3330"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90819569"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76872535"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
- registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71977881"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-553391
size: "4943877"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52747095"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-553391 image ls --format yaml --alsologtostderr:
I1213 09:40:57.271495  404589 out.go:360] Setting OutFile to fd 1 ...
I1213 09:40:57.271615  404589 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:40:57.271622  404589 out.go:374] Setting ErrFile to fd 2...
I1213 09:40:57.271627  404589 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:40:57.271867  404589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
I1213 09:40:57.272533  404589 config.go:182] Loaded profile config "functional-553391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 09:40:57.272647  404589 config.go:182] Loaded profile config "functional-553391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 09:40:57.275081  404589 ssh_runner.go:195] Run: systemctl --version
I1213 09:40:57.277491  404589 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
I1213 09:40:57.277918  404589 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
I1213 09:40:57.277947  404589 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
I1213 09:40:57.278087  404589 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/functional-553391/id_rsa Username:docker}
I1213 09:40:57.358490  404589 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553391 ssh pgrep buildkitd: exit status 1 (158.265269ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 image build -t localhost/my-image:functional-553391 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-553391 image build -t localhost/my-image:functional-553391 testdata/build --alsologtostderr: (3.123510661s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-553391 image build -t localhost/my-image:functional-553391 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> d176089329d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-553391
--> 21872230087
Successfully tagged localhost/my-image:functional-553391
2187223008721a896753586883a02a4c1a8e6a96734390d0d18729ede8c77263
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-553391 image build -t localhost/my-image:functional-553391 testdata/build --alsologtostderr:
I1213 09:40:57.629385  404611 out.go:360] Setting OutFile to fd 1 ...
I1213 09:40:57.629504  404611 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:40:57.629514  404611 out.go:374] Setting ErrFile to fd 2...
I1213 09:40:57.629519  404611 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:40:57.629811  404611 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
I1213 09:40:57.630409  404611 config.go:182] Loaded profile config "functional-553391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 09:40:57.631090  404611 config.go:182] Loaded profile config "functional-553391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 09:40:57.633411  404611 ssh_runner.go:195] Run: systemctl --version
I1213 09:40:57.635672  404611 main.go:143] libmachine: domain functional-553391 has defined MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
I1213 09:40:57.636178  404611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:cd:d5", ip: ""} in network mk-functional-553391: {Iface:virbr1 ExpiryTime:2025-12-13 10:23:41 +0000 UTC Type:0 Mac:52:54:00:f6:cd:d5 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-553391 Clientid:01:52:54:00:f6:cd:d5}
I1213 09:40:57.636209  404611 main.go:143] libmachine: domain functional-553391 has defined IP address 192.168.39.38 and MAC address 52:54:00:f6:cd:d5 in network mk-functional-553391
I1213 09:40:57.636379  404611 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/functional-553391/id_rsa Username:docker}
I1213 09:40:57.714412  404611 build_images.go:162] Building image from path: /tmp/build.4033243128.tar
I1213 09:40:57.714493  404611 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 09:40:57.727240  404611 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4033243128.tar
I1213 09:40:57.732253  404611 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4033243128.tar: stat -c "%s %y" /var/lib/minikube/build/build.4033243128.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4033243128.tar': No such file or directory
I1213 09:40:57.732291  404611 ssh_runner.go:362] scp /tmp/build.4033243128.tar --> /var/lib/minikube/build/build.4033243128.tar (3072 bytes)
I1213 09:40:57.764864  404611 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4033243128
I1213 09:40:57.776873  404611 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4033243128 -xf /var/lib/minikube/build/build.4033243128.tar
I1213 09:40:57.790248  404611 crio.go:315] Building image: /var/lib/minikube/build/build.4033243128
I1213 09:40:57.790350  404611 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-553391 /var/lib/minikube/build/build.4033243128 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1213 09:41:00.650763  404611 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-553391 /var/lib/minikube/build/build.4033243128 --cgroup-manager=cgroupfs: (2.860374348s)
I1213 09:41:00.650883  404611 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4033243128
I1213 09:41:00.665390  404611 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4033243128.tar
I1213 09:41:00.678778  404611 build_images.go:218] Built localhost/my-image:functional-553391 from /tmp/build.4033243128.tar
I1213 09:41:00.678845  404611 build_images.go:134] succeeded building to: functional-553391
I1213 09:41:00.678870  404611 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.47s)
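Note: the build path exercised above can be reproduced by hand. A condensed sketch, using only commands recorded in this run (profile functional-553391, test binary out/minikube-linux-amd64); with the crio runtime, minikube stages the build context inside the guest and drives podman there:

# Build testdata/build into the cluster's image store (crio/podman backend).
out/minikube-linux-amd64 -p functional-553391 image build -t localhost/my-image:functional-553391 testdata/build --alsologtostderr
# Per the log above: the context is tarred to /tmp/build.<N>.tar, copied into /var/lib/minikube/build/ in the guest,
# untarred, and built with: sudo podman build -t localhost/my-image:functional-553391 <dir> --cgroup-manager=cgroupfs
out/minikube-linux-amd64 -p functional-553391 image ls   # confirm localhost/my-image is present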

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.71s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-553391
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.71s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.36s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.65s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 image load --daemon kicbase/echo-server:functional-553391 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-553391 image load --daemon kicbase/echo-server:functional-553391 --alsologtostderr: (1.448723759s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.65s)
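A minimal sketch of the load-from-daemon flow above, assuming the echo-server image was tagged for this profile during the Setup step:

docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-553391
out/minikube-linux-amd64 -p functional-553391 image load --daemon kicbase/echo-server:functional-553391 --alsologtostderr
out/minikube-linux-amd64 -p functional-553391 image ls   # the image should now be listed by the cluster runtime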

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.36s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "293.531652ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "65.202522ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "322.513506ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "70.208429ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.39s)
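The profile listing variants timed above, for reference; --light is expected to be faster because it skips probing each cluster's status (hence ~70ms vs ~320ms in this run):

out/minikube-linux-amd64 profile list                  # table output
out/minikube-linux-amd64 profile list -o json          # machine-readable
out/minikube-linux-amd64 profile list -o json --light  # no per-cluster status checks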

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.92s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 image load --daemon kicbase/echo-server:functional-553391 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.92s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.52s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-553391
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 image load --daemon kicbase/echo-server:functional-553391 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.52s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 image save kicbase/echo-server:functional-553391 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.50s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 image rm kicbase/echo-server:functional-553391 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.74s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.74s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.54s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-553391
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 image save --daemon kicbase/echo-server:functional-553391 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-553391
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.54s)
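ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon together form a save/remove/restore round trip; a condensed sketch with the paths used in this run:

out/minikube-linux-amd64 -p functional-553391 image save kicbase/echo-server:functional-553391 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-553391 image rm kicbase/echo-server:functional-553391 --alsologtostderr
out/minikube-linux-amd64 -p functional-553391 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
# Or push it back into the host docker daemon; note the localhost/ prefix on the restored name.
out/minikube-linux-amd64 -p functional-553391 image save --daemon kicbase/echo-server:functional-553391 --alsologtostderr
docker image inspect localhost/kicbase/echo-server:functional-553391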

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-553391 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2088139448/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553391 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (159.093829ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1213 09:35:51.271084  391877 retry.go:31] will retry after 427.888138ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-553391 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2088139448/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553391 ssh "sudo umount -f /mount-9p": exit status 1 (159.821535ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-553391 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-553391 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2088139448/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.27s)
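A sketch of the 9p mount flow above; the host directory is the per-test temp dir from this run, and 46464 is the port the test pins:

# Serve the host directory into the guest at /mount-9p (the test runs this in the background).
out/minikube-linux-amd64 mount -p functional-553391 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2088139448/001:/mount-9p --alsologtostderr -v=1 --port 46464
# Verify from inside the guest, then force-unmount during teardown.
out/minikube-linux-amd64 -p functional-553391 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-553391 ssh "sudo umount -f /mount-9p"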

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-553391 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3184993173/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-553391 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3184993173/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-553391 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3184993173/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-553391 ssh "findmnt -T" /mount1: exit status 1 (172.418164ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1213 09:35:52.554968  391877 retry.go:31] will retry after 391.643939ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-553391 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-553391 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3184993173/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-553391 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3184993173/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-553391 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3184993173/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.10s)
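VerifyCleanup mounts the same host directory at three guest paths and then relies on a single kill step for teardown; roughly, per the commands recorded above:

out/minikube-linux-amd64 mount -p functional-553391 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3184993173/001:/mount1 --alsologtostderr -v=1   # likewise for /mount2 and /mount3
out/minikube-linux-amd64 -p functional-553391 ssh "findmnt -T" /mount1
out/minikube-linux-amd64 mount -p functional-553391 --kill=true   # asks minikube to tear down its mount processes for this profile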

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-553391 service list: (1.201175101s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-553391 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-553391 service list -o json: (1.197842832s)
functional_test.go:1504: Took "1.197965887s" to run "out/minikube-linux-amd64 -p functional-553391 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-553391
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-553391
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-553391
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (208.81s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1213 09:48:56.551155  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-920998 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m28.248259942s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (208.81s)
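For reference, the HA cluster used by the rest of this serial group was created with the flags below (copied from this run); --ha provisions multiple control-plane nodes and --wait true blocks until the components it checks report healthy:

out/minikube-linux-amd64 -p ha-920998 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2 --container-runtime=crio
out/minikube-linux-amd64 -p ha-920998 status --alsologtostderr -v 5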

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.44s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-920998 kubectl -- rollout status deployment/busybox: (5.041068645s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 kubectl -- exec busybox-7b57f96db7-2dk47 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 kubectl -- exec busybox-7b57f96db7-cdbb2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 kubectl -- exec busybox-7b57f96db7-p5r5z -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 kubectl -- exec busybox-7b57f96db7-2dk47 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 kubectl -- exec busybox-7b57f96db7-cdbb2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 kubectl -- exec busybox-7b57f96db7-p5r5z -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 kubectl -- exec busybox-7b57f96db7-2dk47 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 kubectl -- exec busybox-7b57f96db7-cdbb2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 kubectl -- exec busybox-7b57f96db7-p5r5z -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.44s)
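DeployApp schedules a small busybox Deployment and checks in-cluster DNS from each replica; a sketch using the bundled kubectl and a pod name taken from this run:

out/minikube-linux-amd64 -p ha-920998 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
out/minikube-linux-amd64 -p ha-920998 kubectl -- rollout status deployment/busybox
out/minikube-linux-amd64 -p ha-920998 kubectl -- exec busybox-7b57f96db7-2dk47 -- nslookup kubernetes.default.svc.cluster.local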

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.39s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 kubectl -- exec busybox-7b57f96db7-2dk47 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 kubectl -- exec busybox-7b57f96db7-2dk47 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 kubectl -- exec busybox-7b57f96db7-cdbb2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 kubectl -- exec busybox-7b57f96db7-cdbb2 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 kubectl -- exec busybox-7b57f96db7-p5r5z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 kubectl -- exec busybox-7b57f96db7-p5r5z -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.39s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (45.59s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 node add --alsologtostderr -v 5
E1213 09:51:47.777315  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:51:47.783778  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:51:47.795230  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:51:47.817019  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:51:47.858705  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:51:47.940219  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:51:48.101867  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:51:48.423242  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:51:49.065505  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:51:50.347950  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:51:52.909287  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:51:58.031485  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:51:59.627518  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:52:08.273255  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:52:28.755405  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-920998 node add --alsologtostderr -v 5: (44.906917308s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (45.59s)
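Scaling the cluster out is a two-step check: add a node (a worker by default) and confirm it shows up in status, as recorded above:

out/minikube-linux-amd64 -p ha-920998 node add --alsologtostderr -v 5
out/minikube-linux-amd64 -p ha-920998 status --alsologtostderr -v 5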

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-920998 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.65s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.65s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (10.84s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 cp testdata/cp-test.txt ha-920998:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 cp ha-920998:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3020502136/001/cp-test_ha-920998.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 cp ha-920998:/home/docker/cp-test.txt ha-920998-m02:/home/docker/cp-test_ha-920998_ha-920998-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998-m02 "sudo cat /home/docker/cp-test_ha-920998_ha-920998-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 cp ha-920998:/home/docker/cp-test.txt ha-920998-m03:/home/docker/cp-test_ha-920998_ha-920998-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998-m03 "sudo cat /home/docker/cp-test_ha-920998_ha-920998-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 cp ha-920998:/home/docker/cp-test.txt ha-920998-m04:/home/docker/cp-test_ha-920998_ha-920998-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998-m04 "sudo cat /home/docker/cp-test_ha-920998_ha-920998-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 cp testdata/cp-test.txt ha-920998-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 cp ha-920998-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3020502136/001/cp-test_ha-920998-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 cp ha-920998-m02:/home/docker/cp-test.txt ha-920998:/home/docker/cp-test_ha-920998-m02_ha-920998.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998 "sudo cat /home/docker/cp-test_ha-920998-m02_ha-920998.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 cp ha-920998-m02:/home/docker/cp-test.txt ha-920998-m03:/home/docker/cp-test_ha-920998-m02_ha-920998-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998-m03 "sudo cat /home/docker/cp-test_ha-920998-m02_ha-920998-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 cp ha-920998-m02:/home/docker/cp-test.txt ha-920998-m04:/home/docker/cp-test_ha-920998-m02_ha-920998-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998-m04 "sudo cat /home/docker/cp-test_ha-920998-m02_ha-920998-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 cp testdata/cp-test.txt ha-920998-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 cp ha-920998-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3020502136/001/cp-test_ha-920998-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 cp ha-920998-m03:/home/docker/cp-test.txt ha-920998:/home/docker/cp-test_ha-920998-m03_ha-920998.txt
E1213 09:52:37.813809  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998 "sudo cat /home/docker/cp-test_ha-920998-m03_ha-920998.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 cp ha-920998-m03:/home/docker/cp-test.txt ha-920998-m02:/home/docker/cp-test_ha-920998-m03_ha-920998-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998-m02 "sudo cat /home/docker/cp-test_ha-920998-m03_ha-920998-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 cp ha-920998-m03:/home/docker/cp-test.txt ha-920998-m04:/home/docker/cp-test_ha-920998-m03_ha-920998-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998-m04 "sudo cat /home/docker/cp-test_ha-920998-m03_ha-920998-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 cp testdata/cp-test.txt ha-920998-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 cp ha-920998-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3020502136/001/cp-test_ha-920998-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 cp ha-920998-m04:/home/docker/cp-test.txt ha-920998:/home/docker/cp-test_ha-920998-m04_ha-920998.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998 "sudo cat /home/docker/cp-test_ha-920998-m04_ha-920998.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 cp ha-920998-m04:/home/docker/cp-test.txt ha-920998-m02:/home/docker/cp-test_ha-920998-m04_ha-920998-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998-m02 "sudo cat /home/docker/cp-test_ha-920998-m04_ha-920998-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 cp ha-920998-m04:/home/docker/cp-test.txt ha-920998-m03:/home/docker/cp-test_ha-920998-m04_ha-920998-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998-m03 "sudo cat /home/docker/cp-test_ha-920998-m04_ha-920998-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.84s)
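CopyFile exercises every direction of minikube cp between the host and the four nodes; the basic pattern, with names and paths from this run, is:

out/minikube-linux-amd64 -p ha-920998 cp testdata/cp-test.txt ha-920998:/home/docker/cp-test.txt                      # host -> node
out/minikube-linux-amd64 -p ha-920998 ssh -n ha-920998 "sudo cat /home/docker/cp-test.txt"                            # read it back over ssh
out/minikube-linux-amd64 -p ha-920998 cp ha-920998:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3020502136/001/cp-test_ha-920998.txt   # node -> host
out/minikube-linux-amd64 -p ha-920998 cp ha-920998:/home/docker/cp-test.txt ha-920998-m02:/home/docker/cp-test_ha-920998_ha-920998-m02.txt                 # node -> node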

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (87.65s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 node stop m02 --alsologtostderr -v 5
E1213 09:53:09.717238  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:53:56.555825  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-920998 node stop m02 --alsologtostderr -v 5: (1m27.14486015s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-920998 status --alsologtostderr -v 5: exit status 7 (500.519993ms)

-- stdout --
	ha-920998
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-920998-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-920998-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-920998-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr **
	I1213 09:54:09.400857  409434 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:54:09.401135  409434 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:54:09.401147  409434 out.go:374] Setting ErrFile to fd 2...
	I1213 09:54:09.401153  409434 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:54:09.401379  409434 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
	I1213 09:54:09.401582  409434 out.go:368] Setting JSON to false
	I1213 09:54:09.401615  409434 mustload.go:66] Loading cluster: ha-920998
	I1213 09:54:09.401729  409434 notify.go:221] Checking for updates...
	I1213 09:54:09.402026  409434 config.go:182] Loaded profile config "ha-920998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:54:09.402046  409434 status.go:174] checking status of ha-920998 ...
	I1213 09:54:09.404365  409434 status.go:371] ha-920998 host status = "Running" (err=<nil>)
	I1213 09:54:09.404387  409434 host.go:66] Checking if "ha-920998" exists ...
	I1213 09:54:09.407618  409434 main.go:143] libmachine: domain ha-920998 has defined MAC address 52:54:00:39:0e:33 in network mk-ha-920998
	I1213 09:54:09.408167  409434 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0e:33", ip: ""} in network mk-ha-920998: {Iface:virbr1 ExpiryTime:2025-12-13 10:48:21 +0000 UTC Type:0 Mac:52:54:00:39:0e:33 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-920998 Clientid:01:52:54:00:39:0e:33}
	I1213 09:54:09.408200  409434 main.go:143] libmachine: domain ha-920998 has defined IP address 192.168.39.222 and MAC address 52:54:00:39:0e:33 in network mk-ha-920998
	I1213 09:54:09.408365  409434 host.go:66] Checking if "ha-920998" exists ...
	I1213 09:54:09.408628  409434 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:54:09.411104  409434 main.go:143] libmachine: domain ha-920998 has defined MAC address 52:54:00:39:0e:33 in network mk-ha-920998
	I1213 09:54:09.411526  409434 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:39:0e:33", ip: ""} in network mk-ha-920998: {Iface:virbr1 ExpiryTime:2025-12-13 10:48:21 +0000 UTC Type:0 Mac:52:54:00:39:0e:33 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-920998 Clientid:01:52:54:00:39:0e:33}
	I1213 09:54:09.411558  409434 main.go:143] libmachine: domain ha-920998 has defined IP address 192.168.39.222 and MAC address 52:54:00:39:0e:33 in network mk-ha-920998
	I1213 09:54:09.411713  409434 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/ha-920998/id_rsa Username:docker}
	I1213 09:54:09.498040  409434 ssh_runner.go:195] Run: systemctl --version
	I1213 09:54:09.504551  409434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:54:09.523397  409434 kubeconfig.go:125] found "ha-920998" server: "https://192.168.39.254:8443"
	I1213 09:54:09.523444  409434 api_server.go:166] Checking apiserver status ...
	I1213 09:54:09.523495  409434 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:54:09.545934  409434 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup
	W1213 09:54:09.559851  409434 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:54:09.559923  409434 ssh_runner.go:195] Run: ls
	I1213 09:54:09.564925  409434 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1213 09:54:09.571931  409434 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1213 09:54:09.571956  409434 status.go:463] ha-920998 apiserver status = Running (err=<nil>)
	I1213 09:54:09.571969  409434 status.go:176] ha-920998 status: &{Name:ha-920998 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 09:54:09.571992  409434 status.go:174] checking status of ha-920998-m02 ...
	I1213 09:54:09.573798  409434 status.go:371] ha-920998-m02 host status = "Stopped" (err=<nil>)
	I1213 09:54:09.573817  409434 status.go:384] host is not running, skipping remaining checks
	I1213 09:54:09.573826  409434 status.go:176] ha-920998-m02 status: &{Name:ha-920998-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 09:54:09.573846  409434 status.go:174] checking status of ha-920998-m03 ...
	I1213 09:54:09.575175  409434 status.go:371] ha-920998-m03 host status = "Running" (err=<nil>)
	I1213 09:54:09.575196  409434 host.go:66] Checking if "ha-920998-m03" exists ...
	I1213 09:54:09.577590  409434 main.go:143] libmachine: domain ha-920998-m03 has defined MAC address 52:54:00:b0:71:c7 in network mk-ha-920998
	I1213 09:54:09.578078  409434 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b0:71:c7", ip: ""} in network mk-ha-920998: {Iface:virbr1 ExpiryTime:2025-12-13 10:50:29 +0000 UTC Type:0 Mac:52:54:00:b0:71:c7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-920998-m03 Clientid:01:52:54:00:b0:71:c7}
	I1213 09:54:09.578102  409434 main.go:143] libmachine: domain ha-920998-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:b0:71:c7 in network mk-ha-920998
	I1213 09:54:09.578254  409434 host.go:66] Checking if "ha-920998-m03" exists ...
	I1213 09:54:09.578501  409434 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:54:09.580453  409434 main.go:143] libmachine: domain ha-920998-m03 has defined MAC address 52:54:00:b0:71:c7 in network mk-ha-920998
	I1213 09:54:09.580864  409434 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b0:71:c7", ip: ""} in network mk-ha-920998: {Iface:virbr1 ExpiryTime:2025-12-13 10:50:29 +0000 UTC Type:0 Mac:52:54:00:b0:71:c7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-920998-m03 Clientid:01:52:54:00:b0:71:c7}
	I1213 09:54:09.580885  409434 main.go:143] libmachine: domain ha-920998-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:b0:71:c7 in network mk-ha-920998
	I1213 09:54:09.581046  409434 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/ha-920998-m03/id_rsa Username:docker}
	I1213 09:54:09.663715  409434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:54:09.683421  409434 kubeconfig.go:125] found "ha-920998" server: "https://192.168.39.254:8443"
	I1213 09:54:09.683460  409434 api_server.go:166] Checking apiserver status ...
	I1213 09:54:09.683511  409434 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:54:09.703684  409434 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1787/cgroup
	W1213 09:54:09.714517  409434 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1787/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:54:09.714586  409434 ssh_runner.go:195] Run: ls
	I1213 09:54:09.719377  409434 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1213 09:54:09.724308  409434 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1213 09:54:09.724346  409434 status.go:463] ha-920998-m03 apiserver status = Running (err=<nil>)
	I1213 09:54:09.724358  409434 status.go:176] ha-920998-m03 status: &{Name:ha-920998-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 09:54:09.724377  409434 status.go:174] checking status of ha-920998-m04 ...
	I1213 09:54:09.725845  409434 status.go:371] ha-920998-m04 host status = "Running" (err=<nil>)
	I1213 09:54:09.725862  409434 host.go:66] Checking if "ha-920998-m04" exists ...
	I1213 09:54:09.728090  409434 main.go:143] libmachine: domain ha-920998-m04 has defined MAC address 52:54:00:aa:c2:5f in network mk-ha-920998
	I1213 09:54:09.728485  409434 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:c2:5f", ip: ""} in network mk-ha-920998: {Iface:virbr1 ExpiryTime:2025-12-13 10:52:00 +0000 UTC Type:0 Mac:52:54:00:aa:c2:5f Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-920998-m04 Clientid:01:52:54:00:aa:c2:5f}
	I1213 09:54:09.728509  409434 main.go:143] libmachine: domain ha-920998-m04 has defined IP address 192.168.39.59 and MAC address 52:54:00:aa:c2:5f in network mk-ha-920998
	I1213 09:54:09.728645  409434 host.go:66] Checking if "ha-920998-m04" exists ...
	I1213 09:54:09.728886  409434 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:54:09.730797  409434 main.go:143] libmachine: domain ha-920998-m04 has defined MAC address 52:54:00:aa:c2:5f in network mk-ha-920998
	I1213 09:54:09.731239  409434 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:c2:5f", ip: ""} in network mk-ha-920998: {Iface:virbr1 ExpiryTime:2025-12-13 10:52:00 +0000 UTC Type:0 Mac:52:54:00:aa:c2:5f Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-920998-m04 Clientid:01:52:54:00:aa:c2:5f}
	I1213 09:54:09.731314  409434 main.go:143] libmachine: domain ha-920998-m04 has defined IP address 192.168.39.59 and MAC address 52:54:00:aa:c2:5f in network mk-ha-920998
	I1213 09:54:09.731503  409434 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/ha-920998-m04/id_rsa Username:docker}
	I1213 09:54:09.819535  409434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:54:09.838264  409434 status.go:176] ha-920998-m04 status: &{Name:ha-920998-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (87.65s)
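StopSecondaryNode shuts down one control-plane node and then expects status to report the degradation; in this run status exits non-zero (exit status 7) while m02 is stopped, which the following Degraded check builds on:

out/minikube-linux-amd64 -p ha-920998 node stop m02 --alsologtostderr -v 5
out/minikube-linux-amd64 -p ha-920998 status --alsologtostderr -v 5   # non-zero while m02 is down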

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.49s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.49s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (44.42s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 node start m02 --alsologtostderr -v 5
E1213 09:54:31.639695  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-920998 node start m02 --alsologtostderr -v 5: (43.566902975s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (44.42s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.82s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.82s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (379.09s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 stop --alsologtostderr -v 5
E1213 09:56:47.776409  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:57:15.488601  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:57:37.813667  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:58:56.555538  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-920998 stop --alsologtostderr -v 5: (4m17.072429433s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 start --wait true --alsologtostderr -v 5
E1213 10:00:40.886072  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-920998 start --wait true --alsologtostderr -v 5: (2m1.869802652s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (379.09s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-920998 node delete m03 --alsologtostderr -v 5: (17.368315065s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.06s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.54s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (243.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 stop --alsologtostderr -v 5
E1213 10:01:47.777026  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:02:37.814146  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:03:56.552298  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-920998 stop --alsologtostderr -v 5: (4m3.303653207s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-920998 status --alsologtostderr -v 5: exit status 7 (66.13957ms)

                                                
                                                
-- stdout --
	ha-920998
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-920998-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-920998-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 10:05:36.627627  412698 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:05:36.627910  412698 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:05:36.627920  412698 out.go:374] Setting ErrFile to fd 2...
	I1213 10:05:36.627925  412698 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:05:36.628102  412698 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
	I1213 10:05:36.628279  412698 out.go:368] Setting JSON to false
	I1213 10:05:36.628314  412698 mustload.go:66] Loading cluster: ha-920998
	I1213 10:05:36.628434  412698 notify.go:221] Checking for updates...
	I1213 10:05:36.628788  412698 config.go:182] Loaded profile config "ha-920998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:05:36.628806  412698 status.go:174] checking status of ha-920998 ...
	I1213 10:05:36.630827  412698 status.go:371] ha-920998 host status = "Stopped" (err=<nil>)
	I1213 10:05:36.630843  412698 status.go:384] host is not running, skipping remaining checks
	I1213 10:05:36.630847  412698 status.go:176] ha-920998 status: &{Name:ha-920998 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 10:05:36.630863  412698 status.go:174] checking status of ha-920998-m02 ...
	I1213 10:05:36.632101  412698 status.go:371] ha-920998-m02 host status = "Stopped" (err=<nil>)
	I1213 10:05:36.632116  412698 status.go:384] host is not running, skipping remaining checks
	I1213 10:05:36.632121  412698 status.go:176] ha-920998-m02 status: &{Name:ha-920998-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 10:05:36.632133  412698 status.go:174] checking status of ha-920998-m04 ...
	I1213 10:05:36.633311  412698 status.go:371] ha-920998-m04 host status = "Stopped" (err=<nil>)
	I1213 10:05:36.633338  412698 status.go:384] host is not running, skipping remaining checks
	I1213 10:05:36.633343  412698 status.go:176] ha-920998-m04 status: &{Name:ha-920998-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (243.37s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (85.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1213 10:06:47.776480  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-920998 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m24.578340611s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (85.21s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 node add --control-plane --alsologtostderr -v 5
E1213 10:07:37.813336  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:08:10.850584  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-920998 node add --control-plane --alsologtostderr -v 5: (1m15.320572375s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-920998 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.00s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.67s)

                                                
                                    
TestJSONOutput/start/Command (49s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-855993 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1213 10:08:39.630632  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:08:56.550851  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-855993 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (49.004013649s)
--- PASS: TestJSONOutput/start/Command (49.00s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-855993 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-855993 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.93s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-855993 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-855993 --output=json --user=testUser: (6.929490777s)
--- PASS: TestJSONOutput/stop/Command (6.93s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-266681 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-266681 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (82.02967ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ddab702c-8018-43b9-8874-15ca097077a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-266681] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8fa121ba-c356-43b9-a7ba-071fcb891acd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22127"}}
	{"specversion":"1.0","id":"44781610-7181-442d-a18e-68957d54d09e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0d0f79ae-96b9-40da-ad20-cf5046fe6c7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig"}}
	{"specversion":"1.0","id":"8f7332aa-42db-4525-96f2-f6e752a62eed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube"}}
	{"specversion":"1.0","id":"f475787b-b39b-4b33-9881-5ae0d83c4bc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0367ba7c-1a9e-4d05-9b53-e65a33a356b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c4465267-7052-4dff-8a3f-d87c55dd657a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-266681" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-266681
--- PASS: TestErrorJSONOutput (0.25s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (76.77s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-360544 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-360544 --driver=kvm2  --container-runtime=crio: (37.296316387s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-362672 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-362672 --driver=kvm2  --container-runtime=crio: (36.848647245s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-360544
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-362672
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-362672" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-362672
helpers_test.go:176: Cleaning up "first-360544" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-360544
--- PASS: TestMinikubeProfile (76.77s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (20.07s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-837912 --memory=3072 --mount-string /tmp/TestMountStartserial4181445866/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-837912 --memory=3072 --mount-string /tmp/TestMountStartserial4181445866/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.072888472s)
--- PASS: TestMountStart/serial/StartWithMountFirst (20.07s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-837912 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-837912 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (19.23s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-857428 --memory=3072 --mount-string /tmp/TestMountStartserial4181445866/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-857428 --memory=3072 --mount-string /tmp/TestMountStartserial4181445866/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (18.225186825s)
--- PASS: TestMountStart/serial/StartWithMountSecond (19.23s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-857428 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-857428 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-837912 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-857428 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-857428 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                    
TestMountStart/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-857428
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-857428: (1.256802719s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
TestMountStart/serial/RestartStopped (18.26s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-857428
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-857428: (17.25557669s)
--- PASS: TestMountStart/serial/RestartStopped (18.26s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.33s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-857428 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-857428 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.33s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (99.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-501861 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1213 10:11:47.777070  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:12:37.813823  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-501861 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m39.45562431s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (99.80s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501861 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501861 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-501861 -- rollout status deployment/busybox: (4.383944641s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501861 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501861 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501861 -- exec busybox-7b57f96db7-fzl4j -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501861 -- exec busybox-7b57f96db7-nzfxn -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501861 -- exec busybox-7b57f96db7-fzl4j -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501861 -- exec busybox-7b57f96db7-nzfxn -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501861 -- exec busybox-7b57f96db7-fzl4j -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501861 -- exec busybox-7b57f96db7-nzfxn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.11s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501861 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501861 -- exec busybox-7b57f96db7-fzl4j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501861 -- exec busybox-7b57f96db7-fzl4j -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501861 -- exec busybox-7b57f96db7-nzfxn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-501861 -- exec busybox-7b57f96db7-nzfxn -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.88s)

                                                
                                    
TestMultiNode/serial/AddNode (40.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-501861 -v=5 --alsologtostderr
E1213 10:13:56.552182  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-501861 -v=5 --alsologtostderr: (39.930074297s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (40.38s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-501861 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.46s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 cp testdata/cp-test.txt multinode-501861:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 ssh -n multinode-501861 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 cp multinode-501861:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3115956985/001/cp-test_multinode-501861.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 ssh -n multinode-501861 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 cp multinode-501861:/home/docker/cp-test.txt multinode-501861-m02:/home/docker/cp-test_multinode-501861_multinode-501861-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 ssh -n multinode-501861 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 ssh -n multinode-501861-m02 "sudo cat /home/docker/cp-test_multinode-501861_multinode-501861-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 cp multinode-501861:/home/docker/cp-test.txt multinode-501861-m03:/home/docker/cp-test_multinode-501861_multinode-501861-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 ssh -n multinode-501861 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 ssh -n multinode-501861-m03 "sudo cat /home/docker/cp-test_multinode-501861_multinode-501861-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 cp testdata/cp-test.txt multinode-501861-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 ssh -n multinode-501861-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 cp multinode-501861-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3115956985/001/cp-test_multinode-501861-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 ssh -n multinode-501861-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 cp multinode-501861-m02:/home/docker/cp-test.txt multinode-501861:/home/docker/cp-test_multinode-501861-m02_multinode-501861.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 ssh -n multinode-501861-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 ssh -n multinode-501861 "sudo cat /home/docker/cp-test_multinode-501861-m02_multinode-501861.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 cp multinode-501861-m02:/home/docker/cp-test.txt multinode-501861-m03:/home/docker/cp-test_multinode-501861-m02_multinode-501861-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 ssh -n multinode-501861-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 ssh -n multinode-501861-m03 "sudo cat /home/docker/cp-test_multinode-501861-m02_multinode-501861-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 cp testdata/cp-test.txt multinode-501861-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 ssh -n multinode-501861-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 cp multinode-501861-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3115956985/001/cp-test_multinode-501861-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 ssh -n multinode-501861-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 cp multinode-501861-m03:/home/docker/cp-test.txt multinode-501861:/home/docker/cp-test_multinode-501861-m03_multinode-501861.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 ssh -n multinode-501861-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 ssh -n multinode-501861 "sudo cat /home/docker/cp-test_multinode-501861-m03_multinode-501861.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 cp multinode-501861-m03:/home/docker/cp-test.txt multinode-501861-m02:/home/docker/cp-test_multinode-501861-m03_multinode-501861-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 ssh -n multinode-501861-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 ssh -n multinode-501861-m02 "sudo cat /home/docker/cp-test_multinode-501861-m03_multinode-501861-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.12s)

                                                
                                    
TestMultiNode/serial/StopNode (2.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-501861 node stop m03: (1.5503197s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-501861 status: exit status 7 (344.004903ms)

                                                
                                                
-- stdout --
	multinode-501861
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-501861-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-501861-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-501861 status --alsologtostderr: exit status 7 (329.396025ms)

                                                
                                                
-- stdout --
	multinode-501861
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-501861-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-501861-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 10:14:14.635407  418047 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:14:14.635643  418047 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:14:14.635651  418047 out.go:374] Setting ErrFile to fd 2...
	I1213 10:14:14.635655  418047 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:14:14.635878  418047 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
	I1213 10:14:14.636053  418047 out.go:368] Setting JSON to false
	I1213 10:14:14.636076  418047 mustload.go:66] Loading cluster: multinode-501861
	I1213 10:14:14.636183  418047 notify.go:221] Checking for updates...
	I1213 10:14:14.636423  418047 config.go:182] Loaded profile config "multinode-501861": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:14:14.636438  418047 status.go:174] checking status of multinode-501861 ...
	I1213 10:14:14.638584  418047 status.go:371] multinode-501861 host status = "Running" (err=<nil>)
	I1213 10:14:14.638604  418047 host.go:66] Checking if "multinode-501861" exists ...
	I1213 10:14:14.641632  418047 main.go:143] libmachine: domain multinode-501861 has defined MAC address 52:54:00:ef:88:44 in network mk-multinode-501861
	I1213 10:14:14.642188  418047 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ef:88:44", ip: ""} in network mk-multinode-501861: {Iface:virbr1 ExpiryTime:2025-12-13 11:11:53 +0000 UTC Type:0 Mac:52:54:00:ef:88:44 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-501861 Clientid:01:52:54:00:ef:88:44}
	I1213 10:14:14.642229  418047 main.go:143] libmachine: domain multinode-501861 has defined IP address 192.168.39.175 and MAC address 52:54:00:ef:88:44 in network mk-multinode-501861
	I1213 10:14:14.642402  418047 host.go:66] Checking if "multinode-501861" exists ...
	I1213 10:14:14.642728  418047 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:14:14.645271  418047 main.go:143] libmachine: domain multinode-501861 has defined MAC address 52:54:00:ef:88:44 in network mk-multinode-501861
	I1213 10:14:14.645725  418047 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ef:88:44", ip: ""} in network mk-multinode-501861: {Iface:virbr1 ExpiryTime:2025-12-13 11:11:53 +0000 UTC Type:0 Mac:52:54:00:ef:88:44 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:multinode-501861 Clientid:01:52:54:00:ef:88:44}
	I1213 10:14:14.645773  418047 main.go:143] libmachine: domain multinode-501861 has defined IP address 192.168.39.175 and MAC address 52:54:00:ef:88:44 in network mk-multinode-501861
	I1213 10:14:14.645932  418047 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/multinode-501861/id_rsa Username:docker}
	I1213 10:14:14.725445  418047 ssh_runner.go:195] Run: systemctl --version
	I1213 10:14:14.731711  418047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:14:14.749041  418047 kubeconfig.go:125] found "multinode-501861" server: "https://192.168.39.175:8443"
	I1213 10:14:14.749101  418047 api_server.go:166] Checking apiserver status ...
	I1213 10:14:14.749148  418047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:14:14.769644  418047 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup
	W1213 10:14:14.782941  418047 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:14:14.783018  418047 ssh_runner.go:195] Run: ls
	I1213 10:14:14.788295  418047 api_server.go:253] Checking apiserver healthz at https://192.168.39.175:8443/healthz ...
	I1213 10:14:14.793118  418047 api_server.go:279] https://192.168.39.175:8443/healthz returned 200:
	ok
	I1213 10:14:14.793144  418047 status.go:463] multinode-501861 apiserver status = Running (err=<nil>)
	I1213 10:14:14.793164  418047 status.go:176] multinode-501861 status: &{Name:multinode-501861 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 10:14:14.793185  418047 status.go:174] checking status of multinode-501861-m02 ...
	I1213 10:14:14.794720  418047 status.go:371] multinode-501861-m02 host status = "Running" (err=<nil>)
	I1213 10:14:14.794738  418047 host.go:66] Checking if "multinode-501861-m02" exists ...
	I1213 10:14:14.797541  418047 main.go:143] libmachine: domain multinode-501861-m02 has defined MAC address 52:54:00:63:00:60 in network mk-multinode-501861
	I1213 10:14:14.797956  418047 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:63:00:60", ip: ""} in network mk-multinode-501861: {Iface:virbr1 ExpiryTime:2025-12-13 11:12:49 +0000 UTC Type:0 Mac:52:54:00:63:00:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-501861-m02 Clientid:01:52:54:00:63:00:60}
	I1213 10:14:14.797981  418047 main.go:143] libmachine: domain multinode-501861-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:63:00:60 in network mk-multinode-501861
	I1213 10:14:14.798106  418047 host.go:66] Checking if "multinode-501861-m02" exists ...
	I1213 10:14:14.798302  418047 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:14:14.800229  418047 main.go:143] libmachine: domain multinode-501861-m02 has defined MAC address 52:54:00:63:00:60 in network mk-multinode-501861
	I1213 10:14:14.800571  418047 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:63:00:60", ip: ""} in network mk-multinode-501861: {Iface:virbr1 ExpiryTime:2025-12-13 11:12:49 +0000 UTC Type:0 Mac:52:54:00:63:00:60 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-501861-m02 Clientid:01:52:54:00:63:00:60}
	I1213 10:14:14.800592  418047 main.go:143] libmachine: domain multinode-501861-m02 has defined IP address 192.168.39.43 and MAC address 52:54:00:63:00:60 in network mk-multinode-501861
	I1213 10:14:14.800745  418047 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/multinode-501861-m02/id_rsa Username:docker}
	I1213 10:14:14.883839  418047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:14:14.900407  418047 status.go:176] multinode-501861-m02 status: &{Name:multinode-501861-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1213 10:14:14.900471  418047 status.go:174] checking status of multinode-501861-m03 ...
	I1213 10:14:14.902140  418047 status.go:371] multinode-501861-m03 host status = "Stopped" (err=<nil>)
	I1213 10:14:14.902161  418047 status.go:384] host is not running, skipping remaining checks
	I1213 10:14:14.902168  418047 status.go:176] multinode-501861-m03 status: &{Name:multinode-501861-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.22s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (37.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-501861 node start m03 -v=5 --alsologtostderr: (36.779107553s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.29s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (288s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-501861
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-501861
E1213 10:16:47.777462  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:17:20.889777  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-501861: (2m43.018141928s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-501861 --wait=true -v=5 --alsologtostderr
E1213 10:17:37.814075  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:18:56.551707  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-501861 --wait=true -v=5 --alsologtostderr: (2m4.850398593s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-501861
--- PASS: TestMultiNode/serial/RestartKeepsNodes (288.00s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-501861 node delete m03: (2.228817069s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.69s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (169.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 stop
E1213 10:21:47.776605  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-501861 stop: (2m49.72584587s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-501861 status: exit status 7 (68.756468ms)

                                                
                                                
-- stdout --
	multinode-501861
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-501861-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-501861 status --alsologtostderr: exit status 7 (65.323016ms)

                                                
                                                
-- stdout --
	multinode-501861
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-501861-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 10:22:32.742005  420829 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:22:32.742273  420829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:22:32.742281  420829 out.go:374] Setting ErrFile to fd 2...
	I1213 10:22:32.742285  420829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:22:32.742510  420829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
	I1213 10:22:32.742678  420829 out.go:368] Setting JSON to false
	I1213 10:22:32.742704  420829 mustload.go:66] Loading cluster: multinode-501861
	I1213 10:22:32.742770  420829 notify.go:221] Checking for updates...
	I1213 10:22:32.743059  420829 config.go:182] Loaded profile config "multinode-501861": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:22:32.743074  420829 status.go:174] checking status of multinode-501861 ...
	I1213 10:22:32.745093  420829 status.go:371] multinode-501861 host status = "Stopped" (err=<nil>)
	I1213 10:22:32.745109  420829 status.go:384] host is not running, skipping remaining checks
	I1213 10:22:32.745113  420829 status.go:176] multinode-501861 status: &{Name:multinode-501861 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 10:22:32.745130  420829 status.go:174] checking status of multinode-501861-m02 ...
	I1213 10:22:32.746414  420829 status.go:371] multinode-501861-m02 host status = "Stopped" (err=<nil>)
	I1213 10:22:32.746428  420829 status.go:384] host is not running, skipping remaining checks
	I1213 10:22:32.746433  420829 status.go:176] multinode-501861-m02 status: &{Name:multinode-501861-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (169.86s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (87.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-501861 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1213 10:22:37.813833  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:23:56.551236  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-501861 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m26.842855526s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-501861 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (87.32s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (39.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-501861
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-501861-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-501861-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (80.517897ms)

                                                
                                                
-- stdout --
	* [multinode-501861-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-501861-m02' is duplicated with machine name 'multinode-501861-m02' in profile 'multinode-501861'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-501861-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-501861-m03 --driver=kvm2  --container-runtime=crio: (38.533842913s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-501861
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-501861: exit status 80 (211.479301ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-501861 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-501861-m03 already exists in multinode-501861-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-501861-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.73s)

                                                
                                    
TestScheduledStopUnix (105.86s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-489669 --memory=3072 --driver=kvm2  --container-runtime=crio
E1213 10:26:47.776747  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-489669 --memory=3072 --driver=kvm2  --container-runtime=crio: (34.166806859s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-489669 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1213 10:27:15.475681  423011 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:27:15.475951  423011 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:27:15.475961  423011 out.go:374] Setting ErrFile to fd 2...
	I1213 10:27:15.475965  423011 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:27:15.476156  423011 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
	I1213 10:27:15.476411  423011 out.go:368] Setting JSON to false
	I1213 10:27:15.476492  423011 mustload.go:66] Loading cluster: scheduled-stop-489669
	I1213 10:27:15.476781  423011 config.go:182] Loaded profile config "scheduled-stop-489669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:27:15.476844  423011 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/scheduled-stop-489669/config.json ...
	I1213 10:27:15.477007  423011 mustload.go:66] Loading cluster: scheduled-stop-489669
	I1213 10:27:15.477107  423011 config.go:182] Loaded profile config "scheduled-stop-489669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-489669 -n scheduled-stop-489669
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-489669 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1213 10:27:15.763556  423055 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:27:15.763813  423055 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:27:15.763823  423055 out.go:374] Setting ErrFile to fd 2...
	I1213 10:27:15.763827  423055 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:27:15.764030  423055 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
	I1213 10:27:15.764268  423055 out.go:368] Setting JSON to false
	I1213 10:27:15.764514  423055 daemonize_unix.go:73] killing process 423044 as it is an old scheduled stop
	I1213 10:27:15.764636  423055 mustload.go:66] Loading cluster: scheduled-stop-489669
	I1213 10:27:15.765091  423055 config.go:182] Loaded profile config "scheduled-stop-489669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:27:15.765180  423055 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/scheduled-stop-489669/config.json ...
	I1213 10:27:15.765408  423055 mustload.go:66] Loading cluster: scheduled-stop-489669
	I1213 10:27:15.765538  423055 config.go:182] Loaded profile config "scheduled-stop-489669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1213 10:27:15.770883  391877 retry.go:31] will retry after 111.752µs: open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/scheduled-stop-489669/pid: no such file or directory
I1213 10:27:15.772044  391877 retry.go:31] will retry after 131.051µs: open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/scheduled-stop-489669/pid: no such file or directory
I1213 10:27:15.773199  391877 retry.go:31] will retry after 141.077µs: open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/scheduled-stop-489669/pid: no such file or directory
I1213 10:27:15.774339  391877 retry.go:31] will retry after 294.878µs: open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/scheduled-stop-489669/pid: no such file or directory
I1213 10:27:15.775469  391877 retry.go:31] will retry after 612.56µs: open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/scheduled-stop-489669/pid: no such file or directory
I1213 10:27:15.776601  391877 retry.go:31] will retry after 594.03µs: open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/scheduled-stop-489669/pid: no such file or directory
I1213 10:27:15.777741  391877 retry.go:31] will retry after 1.672459ms: open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/scheduled-stop-489669/pid: no such file or directory
I1213 10:27:15.779985  391877 retry.go:31] will retry after 991.427µs: open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/scheduled-stop-489669/pid: no such file or directory
I1213 10:27:15.781111  391877 retry.go:31] will retry after 3.321311ms: open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/scheduled-stop-489669/pid: no such file or directory
I1213 10:27:15.785354  391877 retry.go:31] will retry after 3.554183ms: open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/scheduled-stop-489669/pid: no such file or directory
I1213 10:27:15.789609  391877 retry.go:31] will retry after 4.667972ms: open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/scheduled-stop-489669/pid: no such file or directory
I1213 10:27:15.794861  391877 retry.go:31] will retry after 6.647895ms: open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/scheduled-stop-489669/pid: no such file or directory
I1213 10:27:15.802083  391877 retry.go:31] will retry after 12.600877ms: open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/scheduled-stop-489669/pid: no such file or directory
I1213 10:27:15.815360  391877 retry.go:31] will retry after 14.467211ms: open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/scheduled-stop-489669/pid: no such file or directory
I1213 10:27:15.830657  391877 retry.go:31] will retry after 39.53498ms: open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/scheduled-stop-489669/pid: no such file or directory
I1213 10:27:15.870994  391877 retry.go:31] will retry after 63.323505ms: open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/scheduled-stop-489669/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-489669 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1213 10:27:37.813676  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-489669 -n scheduled-stop-489669
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-489669
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-489669 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1213 10:27:41.509037  423205 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:27:41.509318  423205 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:27:41.509339  423205 out.go:374] Setting ErrFile to fd 2...
	I1213 10:27:41.509344  423205 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:27:41.509552  423205 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
	I1213 10:27:41.509804  423205 out.go:368] Setting JSON to false
	I1213 10:27:41.509882  423205 mustload.go:66] Loading cluster: scheduled-stop-489669
	I1213 10:27:41.510193  423205 config.go:182] Loaded profile config "scheduled-stop-489669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:27:41.510268  423205 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/scheduled-stop-489669/config.json ...
	I1213 10:27:41.510474  423205 mustload.go:66] Loading cluster: scheduled-stop-489669
	I1213 10:27:41.510568  423205 config.go:182] Loaded profile config "scheduled-stop-489669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-489669
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-489669: exit status 7 (65.202131ms)

                                                
                                                
-- stdout --
	scheduled-stop-489669
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-489669 -n scheduled-stop-489669
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-489669 -n scheduled-stop-489669: exit status 7 (67.346277ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-489669" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-489669
--- PASS: TestScheduledStopUnix (105.86s)
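The scheduled-stop flow exercised above can also be reproduced by hand; a minimal sketch, with <profile> as a placeholder rather than a value from this run:

	minikube stop -p <profile> --schedule 15s        # arm a stop 15 seconds in the future
	minikube stop -p <profile> --cancel-scheduled    # cancel any pending scheduled stop
	minikube status -p <profile>                     # returns exit status 7 once the host is Stopped

The test drives exactly these flags through out/minikube-linux-amd64 and then inspects the resulting status output.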

                                                
                                    
TestRunningBinaryUpgrade (367.01s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3397202985 start -p running-upgrade-689860 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
E1213 10:28:56.551736  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3397202985 start -p running-upgrade-689860 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m35.248337514s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-689860 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-689860 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (4m30.036001676s)
helpers_test.go:176: Cleaning up "running-upgrade-689860" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-689860
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-689860: (1.014062503s)
--- PASS: TestRunningBinaryUpgrade (367.01s)

                                                
                                    
TestKubernetesUpgrade (160.51s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-695716 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-695716 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (38.472377203s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-695716
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-695716: (2.026398653s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-695716 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-695716 status --format={{.Host}}: exit status 7 (70.655891ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-695716 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-695716 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m6.283611548s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-695716 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-695716 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-695716 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (93.937014ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-695716] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-695716
	    minikube start -p kubernetes-upgrade-695716 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6957162 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-695716 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-695716 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-695716 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (52.493071489s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-695716" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-695716
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-695716: (1.002048038s)
--- PASS: TestKubernetesUpgrade (160.51s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-620455 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-620455 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (95.904417ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-620455] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (77.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-620455 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-620455 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m17.214514131s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-620455 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (77.49s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (26.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-620455 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-620455 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (24.452051401s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-620455 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-620455 status -o json: exit status 2 (228.389369ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-620455","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-620455
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-620455: (1.413457669s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (26.09s)

                                                
                                    
TestPause/serial/Start (79.51s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-617427 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-617427 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m19.512260012s)
--- PASS: TestPause/serial/Start (79.51s)

                                                
                                    
TestNoKubernetes/serial/Start (34.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-620455 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-620455 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (34.114423056s)
--- PASS: TestNoKubernetes/serial/Start (34.11s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22127-387918/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-620455 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-620455 "sudo systemctl is-active --quiet service kubelet": exit status 1 (174.663112ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.32s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-620455
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-620455: (1.28463566s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (19.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-620455 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-620455 --driver=kvm2  --container-runtime=crio: (19.708877538s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (19.71s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-620455 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-620455 "sudo systemctl is-active --quiet service kubelet": exit status 1 (165.199992ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.58s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.58s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (77.15s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.631490857 start -p stopped-upgrade-422744 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.631490857 start -p stopped-upgrade-422744 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (38.227393969s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.631490857 -p stopped-upgrade-422744 stop
E1213 10:31:47.776801  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.631490857 -p stopped-upgrade-422744 stop: (1.925781672s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-422744 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-422744 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (36.999676025s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (77.15s)

                                                
                                    
TestNetworkPlugins/group/false (5.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-248819 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-248819 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (128.872644ms)

                                                
                                                
-- stdout --
	* [false-248819] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 10:32:13.246739  427317 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:32:13.246844  427317 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:32:13.246854  427317 out.go:374] Setting ErrFile to fd 2...
	I1213 10:32:13.246861  427317 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:32:13.247078  427317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
	I1213 10:32:13.247670  427317 out.go:368] Setting JSON to false
	I1213 10:32:13.248668  427317 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":8082,"bootTime":1765613851,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 10:32:13.248733  427317 start.go:143] virtualization: kvm guest
	I1213 10:32:13.250801  427317 out.go:179] * [false-248819] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 10:32:13.252298  427317 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:32:13.252298  427317 notify.go:221] Checking for updates...
	I1213 10:32:13.253731  427317 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:32:13.255132  427317 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
	I1213 10:32:13.256451  427317 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
	I1213 10:32:13.257741  427317 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 10:32:13.259156  427317 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:32:13.261003  427317 config.go:182] Loaded profile config "pause-617427": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:32:13.261119  427317 config.go:182] Loaded profile config "running-upgrade-689860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1213 10:32:13.261209  427317 config.go:182] Loaded profile config "stopped-upgrade-422744": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1213 10:32:13.261317  427317 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:32:13.299859  427317 out.go:179] * Using the kvm2 driver based on user configuration
	I1213 10:32:13.301087  427317 start.go:309] selected driver: kvm2
	I1213 10:32:13.301110  427317 start.go:927] validating driver "kvm2" against <nil>
	I1213 10:32:13.301126  427317 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:32:13.303412  427317 out.go:203] 
	W1213 10:32:13.305044  427317 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1213 10:32:13.306347  427317 out.go:203] 

                                                
                                                
** /stderr **
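As the MK_USAGE error above notes, the crio runtime requires a CNI, so --cni=false fails validation before any VM is created. A start line that would pass this particular check might look like the following sketch (the bridge CNI and the placeholder profile name are illustrative assumptions, not taken from this run):

	out/minikube-linux-amd64 start -p <profile> --memory=3072 --cni=bridge --driver=kvm2 --container-runtime=crio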
net_test.go:88: 
----------------------- debugLogs start: false-248819 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-248819

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-248819

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-248819

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-248819

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-248819

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-248819

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-248819

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-248819

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-248819

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-248819

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-248819

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-248819" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-248819" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22127-387918/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 10:32:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.105:8443
  name: pause-617427
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22127-387918/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 10:30:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.174:8443
  name: running-upgrade-689860
contexts:
- context:
    cluster: pause-617427
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 10:32:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-617427
  name: pause-617427
- context:
    cluster: running-upgrade-689860
    user: running-upgrade-689860
  name: running-upgrade-689860
current-context: pause-617427
kind: Config
users:
- name: pause-617427
  user:
    client-certificate: /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/pause-617427/client.crt
    client-key: /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/pause-617427/client.key
- name: running-upgrade-689860
  user:
    client-certificate: /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/running-upgrade-689860/client.crt
    client-key: /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/running-upgrade-689860/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-248819

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-248819"

                                                
                                                
----------------------- debugLogs end: false-248819 [took: 5.246470418s] --------------------------------
helpers_test.go:176: Cleaning up "false-248819" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-248819
--- PASS: TestNetworkPlugins/group/false (5.58s)

                                                
                                    
x
+
TestISOImage/Setup (22.13s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-964680 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-964680 --no-kubernetes --driver=kvm2  --container-runtime=crio: (22.130975914s)
--- PASS: TestISOImage/Setup (22.13s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.3s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-422744
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-422744: (1.299036631s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.30s)

                                                
                                    
x
+
TestISOImage/Binaries/crictl (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-964680 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.20s)

                                                
                                    
x
+
TestISOImage/Binaries/curl (0.23s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-964680 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.23s)

                                                
                                    
x
+
TestISOImage/Binaries/docker (0.23s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-964680 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.23s)

                                                
                                    
x
+
TestISOImage/Binaries/git (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-964680 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.20s)

                                                
                                    
x
+
TestISOImage/Binaries/iptables (0.22s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-964680 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.22s)

                                                
                                    
x
+
TestISOImage/Binaries/podman (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-964680 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.20s)

                                                
                                    
x
+
TestISOImage/Binaries/rsync (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-964680 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.20s)

                                                
                                    
x
+
TestISOImage/Binaries/socat (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-964680 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.21s)

                                                
                                    
x
+
TestISOImage/Binaries/wget (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-964680 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.21s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxControl (0.22s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-964680 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.22s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxService (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-964680 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.21s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (90.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-396390 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1213 10:33:56.551602  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:34:00.892910  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-396390 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m30.926524678s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (90.93s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (88.69s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-483526 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-483526 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m28.685969615s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (88.69s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (74.59s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-855266 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-855266 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m14.589402502s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (74.59s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (11.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-396390 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [41db9c7f-9474-463f-99da-f5b9d6698e68] Pending
helpers_test.go:353: "busybox" [41db9c7f-9474-463f-99da-f5b9d6698e68] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [41db9c7f-9474-463f-99da-f5b9d6698e68] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004079006s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-396390 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.34s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-396390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-396390 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (74.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-396390 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-396390 --alsologtostderr -v=3: (1m14.506599849s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (74.51s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-483526 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [f1e5e5e1-134f-46ae-b6e2-830180e3ed2b] Pending
helpers_test.go:353: "busybox" [f1e5e5e1-134f-46ae-b6e2-830180e3ed2b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [f1e5e5e1-134f-46ae-b6e2-830180e3ed2b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004470275s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-483526 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (11.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-855266 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [68a10e16-b63b-4409-b35f-63275706ff63] Pending
helpers_test.go:353: "busybox" [68a10e16-b63b-4409-b35f-63275706ff63] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [68a10e16-b63b-4409-b35f-63275706ff63] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.009105361s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-855266 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.30s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-483526 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-483526 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (82.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-483526 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-483526 --alsologtostderr -v=3: (1m22.387068656s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (82.39s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-855266 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-855266 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (72.84s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-855266 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-855266 --alsologtostderr -v=3: (1m12.836038383s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (72.84s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-396390 -n old-k8s-version-396390
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-396390 -n old-k8s-version-396390: exit status 7 (65.316152ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-396390 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (43.81s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-396390 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1213 10:36:47.777054  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-396390 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (43.547261953s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-396390 -n old-k8s-version-396390
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (43.81s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-483526 -n no-preload-483526
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-483526 -n no-preload-483526: exit status 7 (78.928858ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-483526 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (56.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-483526 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-483526 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (55.781414344s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-483526 -n no-preload-483526
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (56.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-855266 -n embed-certs-855266
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-855266 -n embed-certs-855266: exit status 7 (70.006064ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-855266 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (59.87s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-855266 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-855266 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (59.497593087s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-855266 -n embed-certs-855266
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (59.87s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-wr6z5" [aaee9f81-6cca-41df-8644-60b40466d43d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-wr6z5" [aaee9f81-6cca-41df-8644-60b40466d43d] Running
E1213 10:37:37.814205  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.005212581s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-wr6z5" [aaee9f81-6cca-41df-8644-60b40466d43d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004684807s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-396390 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-396390 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-396390 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-396390 --alsologtostderr -v=1: (1.079328568s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-396390 -n old-k8s-version-396390
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-396390 -n old-k8s-version-396390: exit status 2 (256.199566ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-396390 -n old-k8s-version-396390
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-396390 -n old-k8s-version-396390: exit status 2 (277.078426ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-396390 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p old-k8s-version-396390 --alsologtostderr -v=1: (1.078555439s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-396390 -n old-k8s-version-396390
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-396390 -n old-k8s-version-396390
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.53s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-085687 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-085687 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (53.328466371s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.33s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-f8zlv" [bba98d8e-1968-437f-b53d-00cf71478696] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005235213s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-bb8gh" [46d22fb0-def0-435b-a53f-7336ba4f3a91] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-bb8gh" [46d22fb0-def0-435b-a53f-7336ba4f3a91] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.005744352s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-f8zlv" [bba98d8e-1968-437f-b53d-00cf71478696] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004285947s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-483526 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-483526 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.89s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-483526 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-483526 --alsologtostderr -v=1: (1.019110358s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-483526 -n no-preload-483526
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-483526 -n no-preload-483526: exit status 2 (257.643245ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-483526 -n no-preload-483526
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-483526 -n no-preload-483526: exit status 2 (238.46097ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-483526 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-483526 -n no-preload-483526
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-483526 -n no-preload-483526
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.89s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-bb8gh" [46d22fb0-def0-435b-a53f-7336ba4f3a91] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011518332s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-855266 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (41.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-344991 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-344991 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (41.47116701s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.47s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-855266 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-855266 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-855266 --alsologtostderr -v=1: (1.087348422s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-855266 -n embed-certs-855266
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-855266 -n embed-certs-855266: exit status 2 (247.689536ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-855266 -n embed-certs-855266
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-855266 -n embed-certs-855266: exit status 2 (236.213998ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-855266 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-855266 -n embed-certs-855266
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-855266 -n embed-certs-855266
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (91.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-248819 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-248819 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m31.369016438s)
--- PASS: TestNetworkPlugins/group/auto/Start (91.37s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-085687 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [5dae0410-deeb-4490-a47a-a5c2e11c96d0] Pending
helpers_test.go:353: "busybox" [5dae0410-deeb-4490-a47a-a5c2e11c96d0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [5dae0410-deeb-4490-a47a-a5c2e11c96d0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004942679s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-085687 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.30s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-085687 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-085687 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.071334612s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-085687 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (81.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-085687 --alsologtostderr -v=3
E1213 10:38:56.551475  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-085687 --alsologtostderr -v=3: (1m21.126164056s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (81.13s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-344991 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-344991 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.157463151s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.16s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-344991 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-344991 --alsologtostderr -v=3: (7.052387625s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.05s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-344991 -n newest-cni-344991
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-344991 -n newest-cni-344991: exit status 7 (69.537495ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-344991 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)
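
A minimal sketch of what the two logged steps above amount to, written against the same minikube binary and profile name shown in the log. It is illustrative only, not the harness code; the assumption that a non-zero status exit (7, with "Stopped" on stdout) is acceptable before enabling an addon comes from the "may be ok" note in the log.

	// Sketch: query host state, then enable an addon on the stopped profile.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// A stopped profile makes "minikube status" exit non-zero (7 in the log)
		// while still printing the host state on stdout.
		out, err := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "newest-cni-344991").CombinedOutput()
		if err != nil {
			fmt.Printf("status returned %v (may be ok for a stopped host): %s", err, out)
		}

		// Addons can still be enabled while the host reports "Stopped".
		if strings.TrimSpace(string(out)) == "Stopped" {
			enable := exec.Command("out/minikube-linux-amd64", "addons", "enable", "dashboard",
				"-p", "newest-cni-344991",
				"--images=MetricsScraper=registry.k8s.io/echoserver:1.4")
			if out, err := enable.CombinedOutput(); err != nil {
				fmt.Printf("addons enable failed: %v\n%s", err, out)
			}
		}
	}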

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (31.83s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-344991 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-344991 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (31.542201515s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-344991 -n newest-cni-344991
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (31.83s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-344991 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)
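
The image audit above only needs the machine-readable listing; a rough local reproduction is sketched below. The JSON schema of "image list --format=json" is not shown in the log, so the sketch deliberately decodes into a generic value rather than assuming field names.

	// Sketch: dump the image list the harness scans for non-minikube images
	// (e.g. kindest/kindnetd above).
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "newest-cni-344991",
			"image", "list", "--format=json").Output()
		if err != nil {
			panic(err)
		}
		var images any
		if err := json.Unmarshal(out, &images); err != nil {
			panic(err)
		}
		fmt.Printf("%+v\n", images)
	}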

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.62s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-344991 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-344991 -n newest-cni-344991
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-344991 -n newest-cni-344991: exit status 2 (232.03343ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-344991 -n newest-cni-344991
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-344991 -n newest-cni-344991: exit status 2 (226.87609ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-344991 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-344991 -n newest-cni-344991
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-344991 -n newest-cni-344991
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.62s)
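
The pause round trip above relies on "status" deliberately exiting with code 2 while components are paused; the harness notes this as "may be ok" and continues. The sketch below replays the same command sequence for the same profile and only prints exit codes instead of failing on them; it is an illustration, not the test implementation.

	// Sketch: pause, inspect component state, then unpause.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) {
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		fmt.Printf("%v -> err=%v\n%s", args, err, out)
	}

	func main() {
		p := "newest-cni-344991"
		run("pause", "-p", p, "--alsologtostderr", "-v=1")
		run("status", "--format={{.APIServer}}", "-p", p, "-n", p) // "Paused", exit 2
		run("status", "--format={{.Kubelet}}", "-p", p, "-n", p)   // "Stopped", exit 2
		run("unpause", "-p", p, "--alsologtostderr", "-v=1")
		run("status", "--format={{.APIServer}}", "-p", p, "-n", p) // expected to exit 0 after unpause
	}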

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (64.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-248819 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-248819 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m4.579448587s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (64.58s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-248819 "pgrep -a kubelet"
I1213 10:40:03.936175  391877 config.go:182] Loaded profile config "auto-248819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-248819 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-8529d" [1b231053-c73a-42bf-9c9f-b106a5fbd46b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-8529d" [1b231053-c73a-42bf-9c9f-b106a5fbd46b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.007581167s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-248819 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-248819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-248819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
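
The DNS, Localhost, and HairPin subtests above are each a single kubectl exec into the netcat deployment; the commands below are copied verbatim from the log and wrapped in a small loop for illustration (assuming the auto-248819 context is still available).

	// Sketch: replay the three connectivity probes against the netcat deployment.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		probes := [][]string{
			// DNS: cluster DNS resolves the kubernetes.default service name.
			{"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default"},
			// Localhost: the pod reaches its own port 8080 via localhost.
			{"exec", "deployment/netcat", "--", "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"},
			// HairPin: the pod reaches itself back through its service name.
			{"exec", "deployment/netcat", "--", "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},
		}
		for _, p := range probes {
			args := append([]string{"--context", "auto-248819"}, p...)
			out, err := exec.Command("kubectl", args...).CombinedOutput()
			fmt.Printf("kubectl %v -> err=%v\n%s", p, err, out)
		}
	}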

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-085687 -n default-k8s-diff-port-085687
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-085687 -n default-k8s-diff-port-085687: exit status 7 (76.358117ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-085687 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (47.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-085687 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
E1213 10:40:16.831849  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/old-k8s-version-396390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:40:16.838307  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/old-k8s-version-396390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:40:16.849798  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/old-k8s-version-396390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:40:16.871467  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/old-k8s-version-396390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:40:16.912899  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/old-k8s-version-396390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:40:16.994421  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/old-k8s-version-396390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:40:17.156014  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/old-k8s-version-396390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:40:17.478263  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/old-k8s-version-396390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:40:18.120346  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/old-k8s-version-396390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:40:19.401821  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/old-k8s-version-396390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:40:21.963720  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/old-k8s-version-396390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:40:27.085174  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/old-k8s-version-396390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-085687 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (47.608991449s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-085687 -n default-k8s-diff-port-085687
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (47.95s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (78.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-248819 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E1213 10:40:37.326717  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/old-k8s-version-396390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:40:37.735074  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/no-preload-483526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:40:37.741608  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/no-preload-483526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:40:37.753151  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/no-preload-483526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:40:37.774656  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/no-preload-483526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:40:37.816564  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/no-preload-483526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:40:37.898218  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/no-preload-483526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:40:38.060626  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/no-preload-483526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:40:38.382257  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/no-preload-483526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:40:39.023589  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/no-preload-483526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:40:40.305800  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/no-preload-483526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:40:42.867986  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/no-preload-483526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:40:47.989700  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/no-preload-483526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-248819 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m18.044294598s)
--- PASS: TestNetworkPlugins/group/flannel/Start (78.04s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-sbxs4" [51b5adef-48b1-4db0-9274-eaf6739d05ae] Running
E1213 10:40:57.808496  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/old-k8s-version-396390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:40:58.231559  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/no-preload-483526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005049718s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
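
The ControllerPod check polls for Ready pods with the app=kindnet label through the harness's own helpers. A roughly equivalent one-shot check with stock kubectl (an approximation, not what the harness runs) is sketched below.

	// Sketch: wait up to the same 10m window for kindnet pods to become Ready.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "kindnet-248819",
			"wait", "--namespace", "kube-system",
			"--for=condition=ready", "pod",
			"--selector=app=kindnet", "--timeout=10m").CombinedOutput()
		fmt.Printf("err=%v\n%s", err, out)
	}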

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-248819 "pgrep -a kubelet"
I1213 10:41:00.651490  391877 config.go:182] Loaded profile config "kindnet-248819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-248819 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-7mpqh" [bbcf85c2-0d3a-429c-986b-551369b65b68] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-7mpqh" [bbcf85c2-0d3a-429c-986b-551369b65b68] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004457441s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (15.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-fxrws" [c4bbab63-5fac-42cd-8a09-a91000b95245] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-fxrws" [c4bbab63-5fac-42cd-8a09-a91000b95245] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.006699112s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (15.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-248819 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-248819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-248819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-fxrws" [c4bbab63-5fac-42cd-8a09-a91000b95245] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005013167s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-085687 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-085687 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-085687 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-085687 --alsologtostderr -v=1: (1.031618823s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-085687 -n default-k8s-diff-port-085687
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-085687 -n default-k8s-diff-port-085687: exit status 2 (269.378498ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-085687 -n default-k8s-diff-port-085687
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-085687 -n default-k8s-diff-port-085687: exit status 2 (277.675926ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-085687 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-085687 -n default-k8s-diff-port-085687
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-085687 -n default-k8s-diff-port-085687
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.08s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (81.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-248819 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-248819 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m21.385717229s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (81.39s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (73.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-248819 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E1213 10:41:30.855181  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:41:38.769850  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/old-k8s-version-396390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:41:47.776457  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-553391/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-248819 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m13.883749484s)
--- PASS: TestNetworkPlugins/group/bridge/Start (73.88s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-ksk6l" [29078c76-8da2-43c4-9db7-b9ef1387d6d8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.224704136s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-248819 "pgrep -a kubelet"
I1213 10:41:54.483287  391877 config.go:182] Loaded profile config "flannel-248819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-248819 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-dd48p" [9efae058-2b2d-4053-82d0-3af3f89bf2d4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1213 10:41:59.634632  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:41:59.675526  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/no-preload-483526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-dd48p" [9efae058-2b2d-4053-82d0-3af3f89bf2d4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.00537108s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-248819 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-248819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-248819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (69.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-248819 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E1213 10:42:37.813215  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/functional-992282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-248819 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m9.715869723s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (69.72s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-248819 "pgrep -a kubelet"
I1213 10:42:42.942628  391877 config.go:182] Loaded profile config "bridge-248819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-248819 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-fw56t" [0a1148bb-75a8-4b8d-b566-05fc20454e25] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-fw56t" [0a1148bb-75a8-4b8d-b566-05fc20454e25] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004636976s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (74.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-248819 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-248819 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m14.266438639s)
--- PASS: TestNetworkPlugins/group/calico/Start (74.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-248819 "pgrep -a kubelet"
I1213 10:42:50.353254  391877 config.go:182] Loaded profile config "enable-default-cni-248819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-248819 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-h8lg8" [4079f1ea-60f3-4ed4-a862-427afd94e414] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-h8lg8" [4079f1ea-60f3-4ed4-a862-427afd94e414] Running
E1213 10:43:00.692085  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/old-k8s-version-396390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004318025s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.59s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-248819 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-248819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-248819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-248819 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-248819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-248819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//data (0.22s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-964680 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.22s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/docker (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-964680 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.20s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/cni (0.21s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-964680 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.21s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/kubelet (0.21s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-964680 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.21s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/minikube (0.22s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-964680 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.22s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/toolbox (0.21s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-964680 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.21s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/boot2docker (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-964680 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.20s)
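
All seven PersistentMounts subtests run the same in-guest probe with a different directory; the sketch below loops over the directories named above and checks each one the way the logged "df -t ext4 ... | grep ..." pipeline does. The guest-964680 profile name is taken from the log; everything else is illustrative.

	// Sketch: verify each expected persistent directory is an ext4 mount.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		dirs := []string{
			"/data", "/var/lib/docker", "/var/lib/cni", "/var/lib/kubelet",
			"/var/lib/minikube", "/var/lib/toolbox", "/var/lib/boot2docker",
		}
		for _, d := range dirs {
			probe := fmt.Sprintf("df -t ext4 %s | grep %s", d, d)
			out, err := exec.Command("out/minikube-linux-amd64", "-p", "guest-964680",
				"ssh", probe).CombinedOutput()
			if err != nil {
				fmt.Printf("%s: not mounted as ext4 (%v)\n%s", d, err, out)
				continue
			}
			fmt.Printf("%s: %s", d, out)
		}
	}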

                                                
                                    
TestISOImage/VersionJSON (0.19s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-964680 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   iso_version: v1.37.0-1765481609-22101
iso_test.go:118:   kicbase_version: v0.0.48-1765275396-22083
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: 28bc9824e3c85d2e3519912c2810d5729ab9ce8c
--- PASS: TestISOImage/VersionJSON (0.19s)
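
The four fields printed above (iso_version, kicbase_version, minikube_version, commit) are the ones the test reports from /version.json; the struct below is an illustrative way to decode them, not the harness's own type, and the file may carry additional fields.

	// Sketch: read and decode /version.json from the guest.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type isoVersion struct {
		ISOVersion      string `json:"iso_version"`
		KicbaseVersion  string `json:"kicbase_version"`
		MinikubeVersion string `json:"minikube_version"`
		Commit          string `json:"commit"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "guest-964680",
			"ssh", "cat /version.json").Output()
		if err != nil {
			panic(err)
		}
		var v isoVersion
		if err := json.Unmarshal(out, &v); err != nil {
			panic(err)
		}
		fmt.Printf("iso=%s kicbase=%s minikube=%s commit=%s\n",
			v.ISOVersion, v.KicbaseVersion, v.MinikubeVersion, v.Commit)
	}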

                                                
                                    
TestISOImage/eBPFSupport (0.2s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-964680 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.20s)
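
The eBPF probe is just a file-existence test for BTF type information inside the guest; the sketch below wraps the same shell test (copied from the log) for illustration.

	// Sketch: check that the guest kernel exposes /sys/kernel/btf/vmlinux.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "guest-964680", "ssh",
			"test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'").CombinedOutput()
		fmt.Printf("err=%v btf: %s", err, out)
	}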

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-248819 "pgrep -a kubelet"
I1213 10:43:31.628471  391877 config.go:182] Loaded profile config "custom-flannel-248819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-248819 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-gwb2g" [62625243-1284-4dfd-8b6a-efe3e7eb2eb7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-gwb2g" [62625243-1284-4dfd-8b6a-efe3e7eb2eb7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004418807s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-248819 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-248819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-248819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-2qmrg" [7051115a-b2c9-42e4-ab20-658ba5dde146] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E1213 10:44:04.052497  391877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/default-k8s-diff-port-085687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "calico-node-2qmrg" [7051115a-b2c9-42e4-ab20-658ba5dde146] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005181019s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-248819 "pgrep -a kubelet"
I1213 10:44:06.247097  391877 config.go:182] Loaded profile config "calico-248819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-248819 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-zfdlq" [5d98395b-b76f-48dd-a8e0-cb80c4c18133] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-zfdlq" [5d98395b-b76f-48dd-a8e0-cb80c4c18133] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003802743s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-248819 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-248819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-248819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)
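The DNS, Localhost, and HairPin checks above can be reproduced by hand against the same profile while the netcat deployment from this run is still up; a minimal sketch, using exactly the commands already recorded in the log for the calico-248819 profile:

    kubectl --context calico-248819 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context calico-248819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context calico-248819 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

The last command is the hairpin check: the pod connects back to itself through the netcat service name rather than localhost.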

                                                
                                    

Test skip (52/431)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0.31
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
152 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
153 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
154 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
155 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
156 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
157 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
158 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
159 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.02
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0.01
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService 0.01
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0.01
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.01
258 TestGvisorAddon 0
280 TestImageBuild 0
308 TestKicCustomNetwork 0
309 TestKicExistingNetwork 0
310 TestKicCustomSubnet 0
311 TestKicStaticIP 0
343 TestChangeNoneUser 0
346 TestScheduledStopWindows 0
348 TestSkaffold 0
350 TestInsufficientStorage 0
354 TestMissingContainerUpgrade 0
364 TestStartStop/group/disable-driver-mounts 0.19
380 TestNetworkPlugins/group/kubenet 4.29
388 TestNetworkPlugins/group/cilium 4.28
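Any entry in this table can be re-run individually with the Go test runner's -run filter; a minimal sketch, assuming the suite can be invoked directly with the standard go test command from the repository root (the driver and container-runtime arguments used by this job are omitted here and would need to be supplied as in the job configuration):

    go test ./test/integration -run 'TestNetworkPlugins/group/kubenet' -v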
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.31s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-246361 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-137009" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-137009
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-248819 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-248819

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-248819

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-248819

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-248819

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-248819

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-248819

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-248819

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-248819

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-248819

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-248819

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-248819

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-248819" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-248819" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22127-387918/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 10:30:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.105:8443
  name: pause-617427
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22127-387918/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 10:30:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.174:8443
  name: running-upgrade-689860
contexts:
- context:
    cluster: pause-617427
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 10:30:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-617427
  name: pause-617427
- context:
    cluster: running-upgrade-689860
    user: running-upgrade-689860
  name: running-upgrade-689860
current-context: ""
kind: Config
users:
- name: pause-617427
  user:
    client-certificate: /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/pause-617427/client.crt
    client-key: /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/pause-617427/client.key
- name: running-upgrade-689860
  user:
    client-certificate: /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/running-upgrade-689860/client.crt
    client-key: /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/running-upgrade-689860/client.key
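The kubeconfig above only contains the pause-617427 and running-upgrade-689860 entries, which is why every kubectl call against kubenet-248819 in these debug logs fails with "context was not found": the kubenet profile was never started because the test was skipped before cluster creation. The contexts that do exist can be listed and selected with standard kubectl commands, for example:

    kubectl config get-contexts
    kubectl config use-context pause-617427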

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-248819

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-248819"

                                                
                                                
----------------------- debugLogs end: kubenet-248819 [took: 4.082831776s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-248819" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-248819
--- SKIP: TestNetworkPlugins/group/kubenet (4.29s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-248819 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-248819

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-248819

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-248819

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-248819

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-248819

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-248819

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-248819

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-248819

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-248819

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-248819

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-248819

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-248819" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-248819

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-248819

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-248819

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-248819

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-248819" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-248819" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22127-387918/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 10:32:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.105:8443
  name: pause-617427
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22127-387918/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 10:30:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.174:8443
  name: running-upgrade-689860
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22127-387918/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 10:32:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.209:8443
  name: stopped-upgrade-422744
contexts:
- context:
    cluster: pause-617427
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 10:32:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-617427
  name: pause-617427
- context:
    cluster: running-upgrade-689860
    user: running-upgrade-689860
  name: running-upgrade-689860
- context:
    cluster: stopped-upgrade-422744
    user: stopped-upgrade-422744
  name: stopped-upgrade-422744
current-context: stopped-upgrade-422744
kind: Config
users:
- name: pause-617427
  user:
    client-certificate: /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/pause-617427/client.crt
    client-key: /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/pause-617427/client.key
- name: running-upgrade-689860
  user:
    client-certificate: /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/running-upgrade-689860/client.crt
    client-key: /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/running-upgrade-689860/client.key
- name: stopped-upgrade-422744
  user:
    client-certificate: /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/stopped-upgrade-422744/client.crt
    client-key: /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/stopped-upgrade-422744/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-248819

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-248819" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248819"

                                                
                                                
----------------------- debugLogs end: cilium-248819 [took: 4.069511195s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-248819" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-248819
--- SKIP: TestNetworkPlugins/group/cilium (4.28s)

                                                
                                    