Test Report: KVM_Linux_crio 22182

                    
d8910aedaf59f4b051fab9f3c680e262e7105014:2025-12-17:42820

Failed tests (15/431)

TestAddons/parallel/Ingress (156.44s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-102582 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-102582 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-102582 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [796dd940-7d71-4204-ae81-121379bea215] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [796dd940-7d71-4204-ae81-121379bea215] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004915546s
I1217 08:18:34.246201  897277 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-102582 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-102582 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.390172739s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-102582 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-102582 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.39.110
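The ssh exit status 28 in the stderr block above most likely corresponds to curl's exit code 28 (operation timed out) propagated back through minikube ssh, i.e. curl ran inside the VM but the request to 127.0.0.1:80 never completed. A minimal sketch for re-checking the ingress path by hand, assuming the addons-102582 profile is still running; the label selector is the same one the test waits on at addons_test.go:211, and the --max-time value is an arbitrary choice for illustration:

    # Is the controller Ready, and does the test workload exist in the default namespace?
    kubectl --context addons-102582 -n ingress-nginx get pods -o wide
    kubectl --context addons-102582 get ingress,svc,endpoints,pods

    # Controller logs around the time of the probe
    kubectl --context addons-102582 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=50

    # Re-run the probe with verbose output and a shorter timeout
    out/minikube-linux-amd64 -p addons-102582 ssh "curl -sv --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"

The post-mortem below only captures the last 25 lines of minikube logs, so the controller's own logs are worth pulling separately.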
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-102582 -n addons-102582
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-102582 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-102582 logs -n 25: (1.155793622s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-077551                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-077551 │ jenkins │ v1.37.0 │ 17 Dec 25 08:15 UTC │ 17 Dec 25 08:15 UTC │
	│ start   │ --download-only -p binary-mirror-293670 --alsologtostderr --binary-mirror http://127.0.0.1:34967 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-293670 │ jenkins │ v1.37.0 │ 17 Dec 25 08:15 UTC │                     │
	│ delete  │ -p binary-mirror-293670                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-293670 │ jenkins │ v1.37.0 │ 17 Dec 25 08:15 UTC │ 17 Dec 25 08:15 UTC │
	│ addons  │ enable dashboard -p addons-102582                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-102582        │ jenkins │ v1.37.0 │ 17 Dec 25 08:15 UTC │                     │
	│ addons  │ disable dashboard -p addons-102582                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-102582        │ jenkins │ v1.37.0 │ 17 Dec 25 08:15 UTC │                     │
	│ start   │ -p addons-102582 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-102582        │ jenkins │ v1.37.0 │ 17 Dec 25 08:15 UTC │ 17 Dec 25 08:17 UTC │
	│ addons  │ addons-102582 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-102582        │ jenkins │ v1.37.0 │ 17 Dec 25 08:17 UTC │ 17 Dec 25 08:17 UTC │
	│ addons  │ addons-102582 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-102582        │ jenkins │ v1.37.0 │ 17 Dec 25 08:17 UTC │ 17 Dec 25 08:18 UTC │
	│ addons  │ enable headlamp -p addons-102582 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-102582        │ jenkins │ v1.37.0 │ 17 Dec 25 08:18 UTC │ 17 Dec 25 08:18 UTC │
	│ addons  │ addons-102582 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-102582        │ jenkins │ v1.37.0 │ 17 Dec 25 08:18 UTC │ 17 Dec 25 08:18 UTC │
	│ ssh     │ addons-102582 ssh cat /opt/local-path-provisioner/pvc-ded9037b-0c48-4dd9-8dfc-ab5a0107bbd1_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-102582        │ jenkins │ v1.37.0 │ 17 Dec 25 08:18 UTC │ 17 Dec 25 08:18 UTC │
	│ addons  │ addons-102582 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-102582        │ jenkins │ v1.37.0 │ 17 Dec 25 08:18 UTC │ 17 Dec 25 08:18 UTC │
	│ addons  │ addons-102582 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-102582        │ jenkins │ v1.37.0 │ 17 Dec 25 08:18 UTC │ 17 Dec 25 08:18 UTC │
	│ ip      │ addons-102582 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-102582        │ jenkins │ v1.37.0 │ 17 Dec 25 08:18 UTC │ 17 Dec 25 08:18 UTC │
	│ addons  │ addons-102582 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-102582        │ jenkins │ v1.37.0 │ 17 Dec 25 08:18 UTC │ 17 Dec 25 08:18 UTC │
	│ addons  │ addons-102582 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-102582        │ jenkins │ v1.37.0 │ 17 Dec 25 08:18 UTC │ 17 Dec 25 08:18 UTC │
	│ addons  │ addons-102582 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-102582        │ jenkins │ v1.37.0 │ 17 Dec 25 08:18 UTC │ 17 Dec 25 08:18 UTC │
	│ addons  │ addons-102582 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-102582        │ jenkins │ v1.37.0 │ 17 Dec 25 08:18 UTC │ 17 Dec 25 08:18 UTC │
	│ ssh     │ addons-102582 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-102582        │ jenkins │ v1.37.0 │ 17 Dec 25 08:18 UTC │                     │
	│ addons  │ addons-102582 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-102582        │ jenkins │ v1.37.0 │ 17 Dec 25 08:18 UTC │ 17 Dec 25 08:18 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-102582                                                                                                                                                                                                                                                                                                                                                                                         │ addons-102582        │ jenkins │ v1.37.0 │ 17 Dec 25 08:18 UTC │ 17 Dec 25 08:18 UTC │
	│ addons  │ addons-102582 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-102582        │ jenkins │ v1.37.0 │ 17 Dec 25 08:18 UTC │ 17 Dec 25 08:18 UTC │
	│ addons  │ addons-102582 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-102582        │ jenkins │ v1.37.0 │ 17 Dec 25 08:19 UTC │ 17 Dec 25 08:19 UTC │
	│ addons  │ addons-102582 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-102582        │ jenkins │ v1.37.0 │ 17 Dec 25 08:19 UTC │ 17 Dec 25 08:19 UTC │
	│ ip      │ addons-102582 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-102582        │ jenkins │ v1.37.0 │ 17 Dec 25 08:20 UTC │ 17 Dec 25 08:20 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 08:15:40
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 08:15:40.594719  898101 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:15:40.594859  898101 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:15:40.594872  898101 out.go:374] Setting ErrFile to fd 2...
	I1217 08:15:40.594879  898101 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:15:40.595092  898101 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
	I1217 08:15:40.595627  898101 out.go:368] Setting JSON to false
	I1217 08:15:40.596463  898101 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10687,"bootTime":1765948654,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:15:40.596538  898101 start.go:143] virtualization: kvm guest
	I1217 08:15:40.598637  898101 out.go:179] * [addons-102582] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 08:15:40.600432  898101 notify.go:221] Checking for updates...
	I1217 08:15:40.600489  898101 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 08:15:40.602058  898101 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:15:40.603578  898101 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-893359/kubeconfig
	I1217 08:15:40.605037  898101 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-893359/.minikube
	I1217 08:15:40.606465  898101 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 08:15:40.607652  898101 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 08:15:40.608873  898101 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:15:40.638412  898101 out.go:179] * Using the kvm2 driver based on user configuration
	I1217 08:15:40.639653  898101 start.go:309] selected driver: kvm2
	I1217 08:15:40.639671  898101 start.go:927] validating driver "kvm2" against <nil>
	I1217 08:15:40.639684  898101 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 08:15:40.640640  898101 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 08:15:40.640946  898101 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 08:15:40.640977  898101 cni.go:84] Creating CNI manager for ""
	I1217 08:15:40.641038  898101 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 08:15:40.641051  898101 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 08:15:40.641086  898101 start.go:353] cluster config:
	{Name:addons-102582 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-102582 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1217 08:15:40.641208  898101 iso.go:125] acquiring lock: {Name:mk258687bf3be9c6817f84af5b9e08a4f47b5420 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 08:15:40.642568  898101 out.go:179] * Starting "addons-102582" primary control-plane node in "addons-102582" cluster
	I1217 08:15:40.643547  898101 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 08:15:40.643582  898101 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-893359/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 08:15:40.643598  898101 cache.go:65] Caching tarball of preloaded images
	I1217 08:15:40.643676  898101 preload.go:238] Found /home/jenkins/minikube-integration/22182-893359/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 08:15:40.643687  898101 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 08:15:40.644000  898101 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/config.json ...
	I1217 08:15:40.644025  898101 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/config.json: {Name:mk1918c307bd4390e64bfbdfa4863891697d2088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:15:40.644158  898101 start.go:360] acquireMachinesLock for addons-102582: {Name:mkdc91ccb2d66cdada71da88e972b4d333b7f63c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1217 08:15:40.644203  898101 start.go:364] duration metric: took 32.244µs to acquireMachinesLock for "addons-102582"
	I1217 08:15:40.644220  898101 start.go:93] Provisioning new machine with config: &{Name:addons-102582 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.3 ClusterName:addons-102582 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:15:40.644264  898101 start.go:125] createHost starting for "" (driver="kvm2")
	I1217 08:15:40.645780  898101 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1217 08:15:40.645953  898101 start.go:159] libmachine.API.Create for "addons-102582" (driver="kvm2")
	I1217 08:15:40.645983  898101 client.go:173] LocalClient.Create starting
	I1217 08:15:40.646066  898101 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca.pem
	I1217 08:15:40.691244  898101 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/cert.pem
	I1217 08:15:40.714247  898101 main.go:143] libmachine: creating domain...
	I1217 08:15:40.714263  898101 main.go:143] libmachine: creating network...
	I1217 08:15:40.715745  898101 main.go:143] libmachine: found existing default network
	I1217 08:15:40.716053  898101 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 08:15:40.716723  898101 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d10740}
	I1217 08:15:40.716815  898101 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-102582</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 08:15:40.722679  898101 main.go:143] libmachine: creating private network mk-addons-102582 192.168.39.0/24...
	I1217 08:15:40.789051  898101 main.go:143] libmachine: private network mk-addons-102582 192.168.39.0/24 created
	I1217 08:15:40.789337  898101 main.go:143] libmachine: <network>
	  <name>mk-addons-102582</name>
	  <uuid>28711a87-2738-497f-8634-1245db1f6c2b</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:38:87:b4'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 08:15:40.789384  898101 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582 ...
	I1217 08:15:40.789416  898101 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22182-893359/.minikube/cache/iso/amd64/minikube-v1.37.0-1765846775-22141-amd64.iso
	I1217 08:15:40.789429  898101 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22182-893359/.minikube
	I1217 08:15:40.789534  898101 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22182-893359/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22182-893359/.minikube/cache/iso/amd64/minikube-v1.37.0-1765846775-22141-amd64.iso...
	I1217 08:15:41.073576  898101 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582/id_rsa...
	I1217 08:15:41.075569  898101 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582/addons-102582.rawdisk...
	I1217 08:15:41.075607  898101 main.go:143] libmachine: Writing magic tar header
	I1217 08:15:41.075661  898101 main.go:143] libmachine: Writing SSH key tar header
	I1217 08:15:41.075766  898101 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582 ...
	I1217 08:15:41.075851  898101 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582
	I1217 08:15:41.075888  898101 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582 (perms=drwx------)
	I1217 08:15:41.075908  898101 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22182-893359/.minikube/machines
	I1217 08:15:41.075926  898101 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22182-893359/.minikube/machines (perms=drwxr-xr-x)
	I1217 08:15:41.075946  898101 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22182-893359/.minikube
	I1217 08:15:41.075958  898101 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22182-893359/.minikube (perms=drwxr-xr-x)
	I1217 08:15:41.075974  898101 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22182-893359
	I1217 08:15:41.075992  898101 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22182-893359 (perms=drwxrwxr-x)
	I1217 08:15:41.076009  898101 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1217 08:15:41.076035  898101 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1217 08:15:41.076048  898101 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1217 08:15:41.076063  898101 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1217 08:15:41.076080  898101 main.go:143] libmachine: checking permissions on dir: /home
	I1217 08:15:41.076093  898101 main.go:143] libmachine: skipping /home - not owner
	I1217 08:15:41.076101  898101 main.go:143] libmachine: defining domain...
	I1217 08:15:41.077372  898101 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-102582</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582/addons-102582.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-102582'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1217 08:15:41.084870  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:b0:b8:e5 in network default
	I1217 08:15:41.085410  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:41.085424  898101 main.go:143] libmachine: starting domain...
	I1217 08:15:41.085428  898101 main.go:143] libmachine: ensuring networks are active...
	I1217 08:15:41.086035  898101 main.go:143] libmachine: Ensuring network default is active
	I1217 08:15:41.086325  898101 main.go:143] libmachine: Ensuring network mk-addons-102582 is active
	I1217 08:15:41.086822  898101 main.go:143] libmachine: getting domain XML...
	I1217 08:15:41.087760  898101 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-102582</name>
	  <uuid>6bca442e-bd30-4173-84e1-edaca2232929</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582/addons-102582.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:27:0b:32'/>
	      <source network='mk-addons-102582'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:b0:b8:e5'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1217 08:15:42.460541  898101 main.go:143] libmachine: waiting for domain to start...
	I1217 08:15:42.461836  898101 main.go:143] libmachine: domain is now running
	I1217 08:15:42.461856  898101 main.go:143] libmachine: waiting for IP...
	I1217 08:15:42.462888  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:42.463399  898101 main.go:143] libmachine: no network interface addresses found for domain addons-102582 (source=lease)
	I1217 08:15:42.463414  898101 main.go:143] libmachine: trying to list again with source=arp
	I1217 08:15:42.463765  898101 main.go:143] libmachine: unable to find current IP address of domain addons-102582 in network mk-addons-102582 (interfaces detected: [])
	I1217 08:15:42.463814  898101 retry.go:31] will retry after 274.919817ms: waiting for domain to come up
	I1217 08:15:42.740799  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:42.741409  898101 main.go:143] libmachine: no network interface addresses found for domain addons-102582 (source=lease)
	I1217 08:15:42.741432  898101 main.go:143] libmachine: trying to list again with source=arp
	I1217 08:15:42.741807  898101 main.go:143] libmachine: unable to find current IP address of domain addons-102582 in network mk-addons-102582 (interfaces detected: [])
	I1217 08:15:42.741867  898101 retry.go:31] will retry after 320.081345ms: waiting for domain to come up
	I1217 08:15:43.063416  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:43.064037  898101 main.go:143] libmachine: no network interface addresses found for domain addons-102582 (source=lease)
	I1217 08:15:43.064063  898101 main.go:143] libmachine: trying to list again with source=arp
	I1217 08:15:43.064357  898101 main.go:143] libmachine: unable to find current IP address of domain addons-102582 in network mk-addons-102582 (interfaces detected: [])
	I1217 08:15:43.064393  898101 retry.go:31] will retry after 466.001134ms: waiting for domain to come up
	I1217 08:15:43.532185  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:43.532831  898101 main.go:143] libmachine: no network interface addresses found for domain addons-102582 (source=lease)
	I1217 08:15:43.532859  898101 main.go:143] libmachine: trying to list again with source=arp
	I1217 08:15:43.533122  898101 main.go:143] libmachine: unable to find current IP address of domain addons-102582 in network mk-addons-102582 (interfaces detected: [])
	I1217 08:15:43.533161  898101 retry.go:31] will retry after 450.326048ms: waiting for domain to come up
	I1217 08:15:43.985648  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:43.986146  898101 main.go:143] libmachine: no network interface addresses found for domain addons-102582 (source=lease)
	I1217 08:15:43.986162  898101 main.go:143] libmachine: trying to list again with source=arp
	I1217 08:15:43.986482  898101 main.go:143] libmachine: unable to find current IP address of domain addons-102582 in network mk-addons-102582 (interfaces detected: [])
	I1217 08:15:43.986532  898101 retry.go:31] will retry after 566.352088ms: waiting for domain to come up
	I1217 08:15:44.554377  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:44.554928  898101 main.go:143] libmachine: no network interface addresses found for domain addons-102582 (source=lease)
	I1217 08:15:44.554947  898101 main.go:143] libmachine: trying to list again with source=arp
	I1217 08:15:44.555237  898101 main.go:143] libmachine: unable to find current IP address of domain addons-102582 in network mk-addons-102582 (interfaces detected: [])
	I1217 08:15:44.555287  898101 retry.go:31] will retry after 905.087189ms: waiting for domain to come up
	I1217 08:15:45.461645  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:45.462113  898101 main.go:143] libmachine: no network interface addresses found for domain addons-102582 (source=lease)
	I1217 08:15:45.462126  898101 main.go:143] libmachine: trying to list again with source=arp
	I1217 08:15:45.462422  898101 main.go:143] libmachine: unable to find current IP address of domain addons-102582 in network mk-addons-102582 (interfaces detected: [])
	I1217 08:15:45.462464  898101 retry.go:31] will retry after 1.095212775s: waiting for domain to come up
	I1217 08:15:46.558954  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:46.559462  898101 main.go:143] libmachine: no network interface addresses found for domain addons-102582 (source=lease)
	I1217 08:15:46.559478  898101 main.go:143] libmachine: trying to list again with source=arp
	I1217 08:15:46.559789  898101 main.go:143] libmachine: unable to find current IP address of domain addons-102582 in network mk-addons-102582 (interfaces detected: [])
	I1217 08:15:46.559828  898101 retry.go:31] will retry after 1.215030769s: waiting for domain to come up
	I1217 08:15:47.776199  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:47.776761  898101 main.go:143] libmachine: no network interface addresses found for domain addons-102582 (source=lease)
	I1217 08:15:47.776780  898101 main.go:143] libmachine: trying to list again with source=arp
	I1217 08:15:47.777114  898101 main.go:143] libmachine: unable to find current IP address of domain addons-102582 in network mk-addons-102582 (interfaces detected: [])
	I1217 08:15:47.777155  898101 retry.go:31] will retry after 1.145838275s: waiting for domain to come up
	I1217 08:15:48.924479  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:48.925173  898101 main.go:143] libmachine: no network interface addresses found for domain addons-102582 (source=lease)
	I1217 08:15:48.925191  898101 main.go:143] libmachine: trying to list again with source=arp
	I1217 08:15:48.925516  898101 main.go:143] libmachine: unable to find current IP address of domain addons-102582 in network mk-addons-102582 (interfaces detected: [])
	I1217 08:15:48.925554  898101 retry.go:31] will retry after 1.888378462s: waiting for domain to come up
	I1217 08:15:50.815805  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:50.816340  898101 main.go:143] libmachine: no network interface addresses found for domain addons-102582 (source=lease)
	I1217 08:15:50.816353  898101 main.go:143] libmachine: trying to list again with source=arp
	I1217 08:15:50.816836  898101 main.go:143] libmachine: unable to find current IP address of domain addons-102582 in network mk-addons-102582 (interfaces detected: [])
	I1217 08:15:50.816895  898101 retry.go:31] will retry after 2.820107496s: waiting for domain to come up
	I1217 08:15:53.641154  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:53.641808  898101 main.go:143] libmachine: no network interface addresses found for domain addons-102582 (source=lease)
	I1217 08:15:53.641832  898101 main.go:143] libmachine: trying to list again with source=arp
	I1217 08:15:53.642237  898101 main.go:143] libmachine: unable to find current IP address of domain addons-102582 in network mk-addons-102582 (interfaces detected: [])
	I1217 08:15:53.642290  898101 retry.go:31] will retry after 2.771962005s: waiting for domain to come up
	I1217 08:15:56.415788  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:56.416264  898101 main.go:143] libmachine: domain addons-102582 has current primary IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:56.416276  898101 main.go:143] libmachine: found domain IP: 192.168.39.110
	I1217 08:15:56.416282  898101 main.go:143] libmachine: reserving static IP address...
	I1217 08:15:56.416596  898101 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-102582", mac: "52:54:00:27:0b:32", ip: "192.168.39.110"} in network mk-addons-102582
	I1217 08:15:56.685272  898101 main.go:143] libmachine: reserved static IP address 192.168.39.110 for domain addons-102582
	I1217 08:15:56.685296  898101 main.go:143] libmachine: waiting for SSH...
	I1217 08:15:56.685304  898101 main.go:143] libmachine: Getting to WaitForSSH function...
	I1217 08:15:56.688708  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:56.689206  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:minikube Clientid:01:52:54:00:27:0b:32}
	I1217 08:15:56.689241  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:56.689464  898101 main.go:143] libmachine: Using SSH client type: native
	I1217 08:15:56.689587  898101 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1217 08:15:56.689597  898101 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1217 08:15:56.792142  898101 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 08:15:56.792500  898101 main.go:143] libmachine: domain creation complete
	I1217 08:15:56.794126  898101 machine.go:94] provisionDockerMachine start ...
	I1217 08:15:56.796391  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:56.796749  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:minikube Clientid:01:52:54:00:27:0b:32}
	I1217 08:15:56.796774  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:56.796930  898101 main.go:143] libmachine: Using SSH client type: native
	I1217 08:15:56.797022  898101 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1217 08:15:56.797031  898101 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 08:15:56.899400  898101 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1217 08:15:56.899463  898101 buildroot.go:166] provisioning hostname "addons-102582"
	I1217 08:15:56.902733  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:56.903176  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:minikube Clientid:01:52:54:00:27:0b:32}
	I1217 08:15:56.903211  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:56.903406  898101 main.go:143] libmachine: Using SSH client type: native
	I1217 08:15:56.903490  898101 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1217 08:15:56.903500  898101 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-102582 && echo "addons-102582" | sudo tee /etc/hostname
	I1217 08:15:57.028465  898101 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-102582
	
	I1217 08:15:57.031206  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:57.031604  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-102582 Clientid:01:52:54:00:27:0b:32}
	I1217 08:15:57.031645  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:57.031796  898101 main.go:143] libmachine: Using SSH client type: native
	I1217 08:15:57.031895  898101 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1217 08:15:57.031918  898101 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-102582' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-102582/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-102582' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 08:15:57.142609  898101 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 08:15:57.142646  898101 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22182-893359/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-893359/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-893359/.minikube}
	I1217 08:15:57.142676  898101 buildroot.go:174] setting up certificates
	I1217 08:15:57.142688  898101 provision.go:84] configureAuth start
	I1217 08:15:57.145437  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:57.145900  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-102582 Clientid:01:52:54:00:27:0b:32}
	I1217 08:15:57.145932  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:57.148449  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:57.148855  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-102582 Clientid:01:52:54:00:27:0b:32}
	I1217 08:15:57.148885  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:57.149063  898101 provision.go:143] copyHostCerts
	I1217 08:15:57.149154  898101 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-893359/.minikube/cert.pem (1123 bytes)
	I1217 08:15:57.149282  898101 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-893359/.minikube/key.pem (1675 bytes)
	I1217 08:15:57.149343  898101 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-893359/.minikube/ca.pem (1078 bytes)
	I1217 08:15:57.149402  898101 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-893359/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca-key.pem org=jenkins.addons-102582 san=[127.0.0.1 192.168.39.110 addons-102582 localhost minikube]
	I1217 08:15:57.186122  898101 provision.go:177] copyRemoteCerts
	I1217 08:15:57.186173  898101 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 08:15:57.188816  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:57.189209  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-102582 Clientid:01:52:54:00:27:0b:32}
	I1217 08:15:57.189238  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:57.189412  898101 sshutil.go:56] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582/id_rsa Username:docker}
	I1217 08:15:57.269951  898101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 08:15:57.299192  898101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 08:15:57.327594  898101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 08:15:57.356481  898101 provision.go:87] duration metric: took 213.778077ms to configureAuth
	I1217 08:15:57.356527  898101 buildroot.go:189] setting minikube options for container-runtime
	I1217 08:15:57.356759  898101 config.go:182] Loaded profile config "addons-102582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:15:57.359645  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:57.360018  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-102582 Clientid:01:52:54:00:27:0b:32}
	I1217 08:15:57.360040  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:57.360188  898101 main.go:143] libmachine: Using SSH client type: native
	I1217 08:15:57.360294  898101 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1217 08:15:57.360324  898101 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 08:15:57.592466  898101 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 08:15:57.592497  898101 machine.go:97] duration metric: took 798.346418ms to provisionDockerMachine
	I1217 08:15:57.592553  898101 client.go:176] duration metric: took 16.946541583s to LocalClient.Create
	I1217 08:15:57.592578  898101 start.go:167] duration metric: took 16.946623435s to libmachine.API.Create "addons-102582"
	I1217 08:15:57.592592  898101 start.go:293] postStartSetup for "addons-102582" (driver="kvm2")
	I1217 08:15:57.592607  898101 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 08:15:57.592693  898101 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 08:15:57.595487  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:57.595905  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-102582 Clientid:01:52:54:00:27:0b:32}
	I1217 08:15:57.595934  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:57.596080  898101 sshutil.go:56] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582/id_rsa Username:docker}
	I1217 08:15:57.676598  898101 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 08:15:57.681338  898101 info.go:137] Remote host: Buildroot 2025.02
	I1217 08:15:57.681360  898101 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-893359/.minikube/addons for local assets ...
	I1217 08:15:57.681416  898101 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-893359/.minikube/files for local assets ...
	I1217 08:15:57.681439  898101 start.go:296] duration metric: took 88.840396ms for postStartSetup
	I1217 08:15:57.684052  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:57.684426  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-102582 Clientid:01:52:54:00:27:0b:32}
	I1217 08:15:57.684454  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:57.684684  898101 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/config.json ...
	I1217 08:15:57.684840  898101 start.go:128] duration metric: took 17.040561303s to createHost
	I1217 08:15:57.686756  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:57.687101  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-102582 Clientid:01:52:54:00:27:0b:32}
	I1217 08:15:57.687120  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:57.687255  898101 main.go:143] libmachine: Using SSH client type: native
	I1217 08:15:57.687334  898101 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1217 08:15:57.687343  898101 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1217 08:15:57.788767  898101 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765959357.747620551
	
	I1217 08:15:57.788807  898101 fix.go:216] guest clock: 1765959357.747620551
	I1217 08:15:57.788814  898101 fix.go:229] Guest: 2025-12-17 08:15:57.747620551 +0000 UTC Remote: 2025-12-17 08:15:57.684849945 +0000 UTC m=+17.139780250 (delta=62.770606ms)
	I1217 08:15:57.788833  898101 fix.go:200] guest clock delta is within tolerance: 62.770606ms
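
Note: fix.go compares the guest's `date +%s.%N` output against the host clock and only re-syncs when the delta exceeds a tolerance. A small sketch of that comparison follows; the timestamps are the ones logged above (delta works out to 62.770606ms), while the 1-second tolerance is an assumption, not a value taken from minikube's source.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // guestClockDelta parses the output of `date +%s.%N` run on the guest and
    // returns the absolute skew from the given host reference time.
    func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, err = strconv.ParseInt(parts[1], 10, 64)
            if err != nil {
                return 0, err
            }
        }
        d := time.Unix(sec, nsec).Sub(host)
        if d < 0 {
            d = -d
        }
        return d, nil
    }

    func main() {
        // Values copied from the log lines above.
        host := time.Date(2025, 12, 17, 8, 15, 57, 684849945, time.UTC)
        d, _ := guestClockDelta("1765959357.747620551", host)
        fmt.Println("delta:", d, "within assumed 1s tolerance:", d < time.Second)
    }
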
	I1217 08:15:57.788838  898101 start.go:83] releasing machines lock for "addons-102582", held for 17.144626299s
	I1217 08:15:57.791533  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:57.792016  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-102582 Clientid:01:52:54:00:27:0b:32}
	I1217 08:15:57.792045  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:57.792674  898101 ssh_runner.go:195] Run: cat /version.json
	I1217 08:15:57.792825  898101 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 08:15:57.795841  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:57.796197  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:57.796300  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-102582 Clientid:01:52:54:00:27:0b:32}
	I1217 08:15:57.796333  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:57.796526  898101 sshutil.go:56] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582/id_rsa Username:docker}
	I1217 08:15:57.796697  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-102582 Clientid:01:52:54:00:27:0b:32}
	I1217 08:15:57.796742  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:57.796958  898101 sshutil.go:56] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582/id_rsa Username:docker}
	I1217 08:15:57.873575  898101 ssh_runner.go:195] Run: systemctl --version
	I1217 08:15:57.899926  898101 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 08:15:58.063660  898101 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 08:15:58.071178  898101 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 08:15:58.071254  898101 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 08:15:58.098604  898101 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 08:15:58.098658  898101 start.go:496] detecting cgroup driver to use...
	I1217 08:15:58.098739  898101 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 08:15:58.123041  898101 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 08:15:58.139148  898101 docker.go:218] disabling cri-docker service (if available) ...
	I1217 08:15:58.139201  898101 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 08:15:58.156102  898101 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 08:15:58.172721  898101 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 08:15:58.316468  898101 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 08:15:58.529546  898101 docker.go:234] disabling docker service ...
	I1217 08:15:58.529616  898101 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 08:15:58.545844  898101 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 08:15:58.560369  898101 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 08:15:58.712769  898101 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 08:15:58.854662  898101 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 08:15:58.870623  898101 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 08:15:58.892795  898101 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 08:15:58.892871  898101 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:15:58.905159  898101 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 08:15:58.905236  898101 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:15:58.918596  898101 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:15:58.931046  898101 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:15:58.943287  898101 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 08:15:58.956242  898101 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:15:58.967739  898101 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 08:15:58.987215  898101 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
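
Note: the block above rewrites /etc/crio/crio.conf.d/02-crio.conf with a series of sed one-liners (pause image, cgroupfs manager, conmon_cgroup, default_sysctls). A rough in-process equivalent of the first two substitutions is sketched below; the sample file contents are illustrative and this is not what crio.go actually runs.

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Illustrative config fragment, not the real 02-crio.conf.
        conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
    [crio.runtime]
    cgroup_manager = "systemd"
    `
        // Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
        pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)

        // Mirrors: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

        fmt.Print(conf)
    }
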
	I1217 08:15:59.000406  898101 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 08:15:59.029640  898101 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1217 08:15:59.029698  898101 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1217 08:15:59.052943  898101 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
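
Note: the three commands above make sure bridged pod traffic is visible to iptables: the sysctl probe fails, so br_netfilter is loaded, and IPv4 forwarding is switched on. A small sketch of the same check-then-enable pattern, reading /proc directly (needs root to write, and assumes modprobe is on PATH):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // If the bridge sysctl file is absent, the br_netfilter module isn't loaded yet.
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
                fmt.Printf("modprobe failed: %v\n%s", err, out)
                return
            }
        }
        // Same effect as: echo 1 > /proc/sys/net/ipv4/ip_forward (requires root).
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
            fmt.Println("enable ip_forward:", err)
        }
    }
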
	I1217 08:15:59.068164  898101 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:15:59.208439  898101 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 08:15:59.304476  898101 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 08:15:59.304595  898101 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 08:15:59.310185  898101 start.go:564] Will wait 60s for crictl version
	I1217 08:15:59.310302  898101 ssh_runner.go:195] Run: which crictl
	I1217 08:15:59.314658  898101 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 08:15:59.348443  898101 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1217 08:15:59.348563  898101 ssh_runner.go:195] Run: crio --version
	I1217 08:15:59.376457  898101 ssh_runner.go:195] Run: crio --version
	I1217 08:15:59.409452  898101 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	I1217 08:15:59.413399  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:59.413859  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-102582 Clientid:01:52:54:00:27:0b:32}
	I1217 08:15:59.413891  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:15:59.414052  898101 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1217 08:15:59.418373  898101 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
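
Note: the bash one-liner above drops any stale host.minikube.internal line from /etc/hosts and re-appends the current gateway IP. A sketch of the same rewrite done as a pure string transformation in Go (the IP and host name come from the log; writing the result back and the sudo handling are omitted):

    package main

    import (
        "fmt"
        "strings"
    )

    // addHostsEntry removes any existing line for the given host name and appends
    // a fresh "ip<TAB>name" entry, mirroring the grep -v / echo pipeline above.
    func addHostsEntry(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(hosts, "\n") {
            if strings.HasSuffix(strings.TrimRight(line, " "), "\t"+name) {
                continue // drop the stale entry
            }
            kept = append(kept, line)
        }
        return strings.TrimRight(strings.Join(kept, "\n"), "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
    }

    func main() {
        hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
        fmt.Print(addHostsEntry(hosts, "192.168.39.1", "host.minikube.internal"))
    }
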
	I1217 08:15:59.433446  898101 kubeadm.go:884] updating cluster {Name:addons-102582 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.
3 ClusterName:addons-102582 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 08:15:59.433672  898101 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 08:15:59.433734  898101 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:15:59.464137  898101 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.3". assuming images are not preloaded.
	I1217 08:15:59.464215  898101 ssh_runner.go:195] Run: which lz4
	I1217 08:15:59.468429  898101 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1217 08:15:59.472921  898101 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1217 08:15:59.472947  898101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340314847 bytes)
	I1217 08:16:00.676452  898101 crio.go:462] duration metric: took 1.208047408s to copy over tarball
	I1217 08:16:00.676585  898101 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1217 08:16:02.176227  898101 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.499601643s)
	I1217 08:16:02.176264  898101 crio.go:469] duration metric: took 1.499771891s to extract the tarball
	I1217 08:16:02.176277  898101 ssh_runner.go:146] rm: /preloaded.tar.lz4
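
Note: the preload is shipped as an lz4-compressed tarball, scp'd to /preloaded.tar.lz4 and unpacked into /var with extended attributes preserved. If you need to reproduce the extraction by hand while debugging a node, wrapping the exact command from the log looks like this (assumes lz4 and GNU tar are present on the guest, as they are in the Buildroot image):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Same invocation as the ssh_runner call logged above.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        out, err := cmd.CombinedOutput()
        if err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
    }
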
	I1217 08:16:02.213392  898101 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 08:16:02.253022  898101 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 08:16:02.253058  898101 cache_images.go:86] Images are preloaded, skipping loading
	I1217 08:16:02.253075  898101 kubeadm.go:935] updating node { 192.168.39.110 8443 v1.34.3 crio true true} ...
	I1217 08:16:02.253243  898101 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-102582 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:addons-102582 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 08:16:02.253354  898101 ssh_runner.go:195] Run: crio config
	I1217 08:16:02.303168  898101 cni.go:84] Creating CNI manager for ""
	I1217 08:16:02.303198  898101 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 08:16:02.303227  898101 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 08:16:02.303260  898101 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-102582 NodeName:addons-102582 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 08:16:02.303447  898101 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-102582"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.110"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
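
Note: the kubeadm.yaml printed above is rendered from the option set logged at kubeadm.go:190. A minimal text/template sketch of how such a rendering can be wired up is shown below; the template fragment is abridged and illustrative, not minikube's real template, and only the parameters visible in this log are filled in.

    package main

    import (
        "os"
        "text/template"
    )

    type kubeadmParams struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
        PodSubnet        string
        ServiceCIDR      string
    }

    // Abridged, illustrative InitConfiguration/ClusterConfiguration fragment.
    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        - name: "node-ip"
          value: "{{.AdvertiseAddress}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceCIDR}}
    `

    func main() {
        p := kubeadmParams{
            AdvertiseAddress: "192.168.39.110",
            BindPort:         8443,
            NodeName:         "addons-102582",
            PodSubnet:        "10.244.0.0/16",
            ServiceCIDR:      "10.96.0.0/12",
        }
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        if err := t.Execute(os.Stdout, p); err != nil {
            panic(err)
        }
    }
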
	
	I1217 08:16:02.303548  898101 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 08:16:02.316849  898101 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 08:16:02.316960  898101 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 08:16:02.329293  898101 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1217 08:16:02.351147  898101 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 08:16:02.373176  898101 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1217 08:16:02.401768  898101 ssh_runner.go:195] Run: grep 192.168.39.110	control-plane.minikube.internal$ /etc/hosts
	I1217 08:16:02.406393  898101 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.110	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 08:16:02.422003  898101 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 08:16:02.574955  898101 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:16:02.597917  898101 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582 for IP: 192.168.39.110
	I1217 08:16:02.597959  898101 certs.go:195] generating shared ca certs ...
	I1217 08:16:02.597984  898101 certs.go:227] acquiring lock for ca certs: {Name:mk9975fd3c0c6324a63f90fa6e20c46f3034e6ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:16:02.598142  898101 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-893359/.minikube/ca.key
	I1217 08:16:02.755445  898101 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-893359/.minikube/ca.crt ...
	I1217 08:16:02.755487  898101 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-893359/.minikube/ca.crt: {Name:mk67cda5c11a6e307a5cdcf4ae0d1890d7a63f51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:16:02.755677  898101 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-893359/.minikube/ca.key ...
	I1217 08:16:02.755693  898101 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-893359/.minikube/ca.key: {Name:mkeab869f44bbf7f24d75988c4061646efaa4c65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:16:02.755778  898101 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-893359/.minikube/proxy-client-ca.key
	I1217 08:16:03.078118  898101 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-893359/.minikube/proxy-client-ca.crt ...
	I1217 08:16:03.078150  898101 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-893359/.minikube/proxy-client-ca.crt: {Name:mka70ae4d256e51ffe0c4325eb1d577fba9b52c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:16:03.078316  898101 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-893359/.minikube/proxy-client-ca.key ...
	I1217 08:16:03.078329  898101 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-893359/.minikube/proxy-client-ca.key: {Name:mk0206accd97b89990cc0754e476a3e9d9112c6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:16:03.078402  898101 certs.go:257] generating profile certs ...
	I1217 08:16:03.078502  898101 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.key
	I1217 08:16:03.078542  898101 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt with IP's: []
	I1217 08:16:03.231495  898101 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt ...
	I1217 08:16:03.231532  898101 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: {Name:mk8a56e4855fcea20b3b0bdeee0322d566f4b9a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:16:03.231696  898101 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.key ...
	I1217 08:16:03.231707  898101 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.key: {Name:mk627366167d74bd60bdc06894d61116e4663aee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:16:03.231777  898101 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/apiserver.key.e4d1d4da
	I1217 08:16:03.231796  898101 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/apiserver.crt.e4d1d4da with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110]
	I1217 08:16:03.288399  898101 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/apiserver.crt.e4d1d4da ...
	I1217 08:16:03.288428  898101 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/apiserver.crt.e4d1d4da: {Name:mk88f0b76767f708c6574ed774483dcbf91ae616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:16:03.288594  898101 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/apiserver.key.e4d1d4da ...
	I1217 08:16:03.288608  898101 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/apiserver.key.e4d1d4da: {Name:mk7de479fb04b3dfcfe230947fe3e1d99b1257b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:16:03.288693  898101 certs.go:382] copying /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/apiserver.crt.e4d1d4da -> /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/apiserver.crt
	I1217 08:16:03.288769  898101 certs.go:386] copying /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/apiserver.key.e4d1d4da -> /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/apiserver.key
	I1217 08:16:03.288820  898101 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/proxy-client.key
	I1217 08:16:03.288838  898101 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/proxy-client.crt with IP's: []
	I1217 08:16:03.303790  898101 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/proxy-client.crt ...
	I1217 08:16:03.303815  898101 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/proxy-client.crt: {Name:mke9ab2d4bc2bde6bc817e9f13b9d03b8632656c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:16:03.303960  898101 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/proxy-client.key ...
	I1217 08:16:03.303974  898101 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/proxy-client.key: {Name:mk33026a65023eeb7698627420a51cb5d1df0713 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:16:03.304198  898101 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 08:16:03.304239  898101 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca.pem (1078 bytes)
	I1217 08:16:03.304269  898101 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/cert.pem (1123 bytes)
	I1217 08:16:03.304293  898101 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/key.pem (1675 bytes)
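
Note: certs.go generates a shared CA plus profile certificates whose SANs cover the service VIPs and the node IP listed above. A compact crypto/x509 sketch of signing a serving certificate with those SANs follows; the SAN values are assembled from this log, while key size, validity and subject names are assumptions and minikube's crypto.go differs in such details.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // 1. Self-signed CA (the "minikubeCA" role in the log).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, _ := x509.ParseCertificate(caDER)

        // 2. Serving cert with SANs taken from the log lines above.
        leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"addons-102582", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.110")},
        }
        leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }
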
	I1217 08:16:03.304911  898101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 08:16:03.343609  898101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 08:16:03.377731  898101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 08:16:03.407436  898101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 08:16:03.435899  898101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 08:16:03.464419  898101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 08:16:03.492570  898101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 08:16:03.521291  898101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 08:16:03.549327  898101 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 08:16:03.579093  898101 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 08:16:03.599519  898101 ssh_runner.go:195] Run: openssl version
	I1217 08:16:03.606228  898101 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:16:03.617766  898101 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 08:16:03.629482  898101 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:16:03.635009  898101 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 08:16 /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:16:03.635084  898101 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 08:16:03.642475  898101 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 08:16:03.653396  898101 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
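
Note: the two symlink steps above install the cluster CA into the guest trust store: the PEM is linked into /etc/ssl/certs under its own name and again under its OpenSSL subject hash (b5213941.0 here), which is the name OpenSSL-based tools resolve. A sketch of the same sequence driven from Go, shelling out to openssl for the hash (must run as root, since it writes under /etc/ssl/certs):

    package main

    import (
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        const pem = "/usr/share/ca-certificates/minikubeCA.pem"

        // `openssl x509 -hash -noout` prints the subject hash, e.g. b5213941.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out))

        link := "/etc/ssl/certs/" + hash + ".0"
        _ = os.Remove(link) // mirror `ln -fs`: replace any stale link
        if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
            log.Fatal(err)
        }
    }
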
	I1217 08:16:03.664590  898101 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 08:16:03.669562  898101 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 08:16:03.669615  898101 kubeadm.go:401] StartCluster: {Name:addons-102582 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 C
lusterName:addons-102582 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:16:03.669712  898101 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 08:16:03.669802  898101 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 08:16:03.708680  898101 cri.go:89] found id: ""
	I1217 08:16:03.708752  898101 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 08:16:03.720764  898101 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 08:16:03.732246  898101 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 08:16:03.743380  898101 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 08:16:03.743412  898101 kubeadm.go:158] found existing configuration files:
	
	I1217 08:16:03.743472  898101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 08:16:03.753972  898101 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 08:16:03.754041  898101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 08:16:03.765163  898101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 08:16:03.776016  898101 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 08:16:03.776081  898101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 08:16:03.787615  898101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 08:16:03.799819  898101 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 08:16:03.799885  898101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 08:16:03.811260  898101 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 08:16:03.822074  898101 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 08:16:03.822184  898101 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 08:16:03.834340  898101 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1217 08:16:03.886595  898101 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 08:16:03.886662  898101 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 08:16:03.988536  898101 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 08:16:03.988700  898101 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 08:16:03.988849  898101 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 08:16:04.001576  898101 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 08:16:04.096314  898101 out.go:252]   - Generating certificates and keys ...
	I1217 08:16:04.096434  898101 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 08:16:04.096551  898101 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 08:16:04.258087  898101 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 08:16:04.469709  898101 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 08:16:04.701294  898101 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 08:16:04.785764  898101 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 08:16:04.983246  898101 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 08:16:04.983392  898101 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-102582 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I1217 08:16:05.094368  898101 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 08:16:05.094619  898101 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-102582 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I1217 08:16:05.439657  898101 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 08:16:05.634903  898101 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 08:16:05.882410  898101 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 08:16:05.882479  898101 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 08:16:05.915364  898101 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 08:16:06.225739  898101 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 08:16:06.455583  898101 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 08:16:06.619382  898101 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 08:16:06.864344  898101 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 08:16:06.864455  898101 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 08:16:06.866441  898101 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 08:16:06.868203  898101 out.go:252]   - Booting up control plane ...
	I1217 08:16:06.868325  898101 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 08:16:06.868438  898101 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 08:16:06.869586  898101 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 08:16:06.888866  898101 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 08:16:06.889009  898101 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 08:16:06.895878  898101 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 08:16:06.896248  898101 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 08:16:06.896291  898101 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 08:16:07.073837  898101 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 08:16:07.074017  898101 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 08:16:08.574196  898101 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.502380932s
	I1217 08:16:08.576856  898101 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 08:16:08.576972  898101 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.110:8443/livez
	I1217 08:16:08.577148  898101 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 08:16:08.577299  898101 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 08:16:10.079883  898101 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.504382909s
	I1217 08:16:11.921293  898101 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.347705575s
	I1217 08:16:14.077347  898101 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.505492814s
	I1217 08:16:14.098141  898101 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 08:16:14.111493  898101 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 08:16:14.122256  898101 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 08:16:14.122435  898101 kubeadm.go:319] [mark-control-plane] Marking the node addons-102582 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 08:16:14.136295  898101 kubeadm.go:319] [bootstrap-token] Using token: vrvupm.8tvycfiknexh5wu7
	I1217 08:16:14.137608  898101 out.go:252]   - Configuring RBAC rules ...
	I1217 08:16:14.137759  898101 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 08:16:14.143097  898101 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 08:16:14.148906  898101 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 08:16:14.152463  898101 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 08:16:14.155733  898101 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 08:16:14.161725  898101 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 08:16:14.484619  898101 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 08:16:14.920969  898101 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 08:16:15.483681  898101 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 08:16:15.484679  898101 kubeadm.go:319] 
	I1217 08:16:15.484759  898101 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 08:16:15.484770  898101 kubeadm.go:319] 
	I1217 08:16:15.484842  898101 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 08:16:15.484863  898101 kubeadm.go:319] 
	I1217 08:16:15.484916  898101 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 08:16:15.484983  898101 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 08:16:15.485029  898101 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 08:16:15.485036  898101 kubeadm.go:319] 
	I1217 08:16:15.485085  898101 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 08:16:15.485097  898101 kubeadm.go:319] 
	I1217 08:16:15.485170  898101 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 08:16:15.485180  898101 kubeadm.go:319] 
	I1217 08:16:15.485254  898101 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 08:16:15.485374  898101 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 08:16:15.485484  898101 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 08:16:15.485494  898101 kubeadm.go:319] 
	I1217 08:16:15.485663  898101 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 08:16:15.485800  898101 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 08:16:15.485814  898101 kubeadm.go:319] 
	I1217 08:16:15.485886  898101 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token vrvupm.8tvycfiknexh5wu7 \
	I1217 08:16:15.486049  898101 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c3947ba636666199dd8b3e2c01ec34729b53edad6b7ad13a07443be717b10ef3 \
	I1217 08:16:15.486095  898101 kubeadm.go:319] 	--control-plane 
	I1217 08:16:15.486107  898101 kubeadm.go:319] 
	I1217 08:16:15.486295  898101 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 08:16:15.486310  898101 kubeadm.go:319] 
	I1217 08:16:15.486410  898101 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token vrvupm.8tvycfiknexh5wu7 \
	I1217 08:16:15.486573  898101 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c3947ba636666199dd8b3e2c01ec34729b53edad6b7ad13a07443be717b10ef3 
	I1217 08:16:15.487699  898101 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 08:16:15.487732  898101 cni.go:84] Creating CNI manager for ""
	I1217 08:16:15.487744  898101 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 08:16:15.489265  898101 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1217 08:16:15.490407  898101 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1217 08:16:15.504953  898101 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
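
Note: with no CNI explicitly selected, a 496-byte bridge conflist is written to /etc/cni/net.d/1-k8s.conflist. The exact bytes are not in this log; the sketch below generates an illustrative bridge + host-local configuration of the same general shape, where every field value is an assumption apart from the pod CIDR, which matches the 10.244.0.0/16 logged earlier.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Illustrative only: not the conflist minikube actually wrote.
        conflist := map[string]any{
            "cniVersion": "1.0.0",
            "name":       "bridge",
            "plugins": []map[string]any{
                {
                    "type":             "bridge",
                    "bridge":           "bridge",
                    "isDefaultGateway": true,
                    "ipMasq":           true,
                    "hairpinMode":      true,
                    "ipam": map[string]any{
                        "type":   "host-local",
                        "subnet": "10.244.0.0/16", // pod CIDR from the log above
                    },
                },
                {"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
            },
        }
        b, _ := json.MarshalIndent(conflist, "", "  ")
        fmt.Println(string(b))
    }
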
	I1217 08:16:15.530443  898101 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 08:16:15.530601  898101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-102582 minikube.k8s.io/updated_at=2025_12_17T08_16_15_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144 minikube.k8s.io/name=addons-102582 minikube.k8s.io/primary=true
	I1217 08:16:15.530607  898101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:16:15.656029  898101 ops.go:34] apiserver oom_adj: -16
	I1217 08:16:15.656124  898101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:16:16.157195  898101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:16:16.656950  898101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:16:17.157135  898101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:16:17.656973  898101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:16:18.156478  898101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:16:18.656276  898101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:16:19.156359  898101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:16:19.656298  898101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:16:20.157177  898101 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 08:16:20.233459  898101 kubeadm.go:1114] duration metric: took 4.702955577s to wait for elevateKubeSystemPrivileges
	I1217 08:16:20.233532  898101 kubeadm.go:403] duration metric: took 16.563920806s to StartCluster
	I1217 08:16:20.233561  898101 settings.go:142] acquiring lock: {Name:mk00e9c64ab8ac6f70bd45684fd03a06bf70934d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:16:20.233714  898101 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-893359/kubeconfig
	I1217 08:16:20.234220  898101 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-893359/kubeconfig: {Name:mk96c1c47bbd55cd0ea3fb74224ea198e9d4fd5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 08:16:20.234470  898101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 08:16:20.234472  898101 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 08:16:20.234565  898101 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1217 08:16:20.234721  898101 addons.go:70] Setting yakd=true in profile "addons-102582"
	I1217 08:16:20.234757  898101 addons.go:239] Setting addon yakd=true in "addons-102582"
	I1217 08:16:20.234767  898101 addons.go:70] Setting inspektor-gadget=true in profile "addons-102582"
	I1217 08:16:20.234782  898101 config.go:182] Loaded profile config "addons-102582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:16:20.234792  898101 host.go:66] Checking if "addons-102582" exists ...
	I1217 08:16:20.234800  898101 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-102582"
	I1217 08:16:20.234814  898101 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-102582"
	I1217 08:16:20.234833  898101 addons.go:70] Setting cloud-spanner=true in profile "addons-102582"
	I1217 08:16:20.234849  898101 addons.go:239] Setting addon cloud-spanner=true in "addons-102582"
	I1217 08:16:20.234849  898101 addons.go:70] Setting default-storageclass=true in profile "addons-102582"
	I1217 08:16:20.234867  898101 host.go:66] Checking if "addons-102582" exists ...
	I1217 08:16:20.234881  898101 addons.go:70] Setting ingress=true in profile "addons-102582"
	I1217 08:16:20.234893  898101 host.go:66] Checking if "addons-102582" exists ...
	I1217 08:16:20.234907  898101 addons.go:239] Setting addon ingress=true in "addons-102582"
	I1217 08:16:20.234908  898101 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-102582"
	I1217 08:16:20.234919  898101 addons.go:70] Setting registry-creds=true in profile "addons-102582"
	I1217 08:16:20.234980  898101 addons.go:70] Setting storage-provisioner=true in profile "addons-102582"
	I1217 08:16:20.234995  898101 addons.go:70] Setting ingress-dns=true in profile "addons-102582"
	I1217 08:16:20.235006  898101 addons.go:239] Setting addon ingress-dns=true in "addons-102582"
	I1217 08:16:20.235007  898101 addons.go:70] Setting volcano=true in profile "addons-102582"
	I1217 08:16:20.235018  898101 addons.go:239] Setting addon volcano=true in "addons-102582"
	I1217 08:16:20.235036  898101 host.go:66] Checking if "addons-102582" exists ...
	I1217 08:16:20.234958  898101 host.go:66] Checking if "addons-102582" exists ...
	I1217 08:16:20.235192  898101 addons.go:70] Setting volumesnapshots=true in profile "addons-102582"
	I1217 08:16:20.235211  898101 addons.go:70] Setting metrics-server=true in profile "addons-102582"
	I1217 08:16:20.235216  898101 addons.go:239] Setting addon volumesnapshots=true in "addons-102582"
	I1217 08:16:20.235226  898101 addons.go:239] Setting addon metrics-server=true in "addons-102582"
	I1217 08:16:20.235246  898101 host.go:66] Checking if "addons-102582" exists ...
	I1217 08:16:20.235253  898101 host.go:66] Checking if "addons-102582" exists ...
	I1217 08:16:20.235433  898101 addons.go:70] Setting registry=true in profile "addons-102582"
	I1217 08:16:20.235458  898101 addons.go:239] Setting addon registry=true in "addons-102582"
	I1217 08:16:20.235489  898101 host.go:66] Checking if "addons-102582" exists ...
	I1217 08:16:20.234793  898101 addons.go:239] Setting addon inspektor-gadget=true in "addons-102582"
	I1217 08:16:20.235915  898101 host.go:66] Checking if "addons-102582" exists ...
	I1217 08:16:20.234894  898101 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-102582"
	I1217 08:16:20.234998  898101 addons.go:239] Setting addon storage-provisioner=true in "addons-102582"
	I1217 08:16:20.235037  898101 host.go:66] Checking if "addons-102582" exists ...
	I1217 08:16:20.234987  898101 addons.go:239] Setting addon registry-creds=true in "addons-102582"
	I1217 08:16:20.236487  898101 host.go:66] Checking if "addons-102582" exists ...
	I1217 08:16:20.234972  898101 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-102582"
	I1217 08:16:20.234969  898101 addons.go:70] Setting gcp-auth=true in profile "addons-102582"
	I1217 08:16:20.236334  898101 host.go:66] Checking if "addons-102582" exists ...
	I1217 08:16:20.235201  898101 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-102582"
	I1217 08:16:20.236639  898101 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-102582"
	I1217 08:16:20.236674  898101 host.go:66] Checking if "addons-102582" exists ...
	I1217 08:16:20.236705  898101 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-102582"
	I1217 08:16:20.234960  898101 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-102582"
	I1217 08:16:20.236965  898101 mustload.go:66] Loading cluster: addons-102582
	I1217 08:16:20.236985  898101 host.go:66] Checking if "addons-102582" exists ...
	I1217 08:16:20.237147  898101 config.go:182] Loaded profile config "addons-102582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:16:20.237323  898101 out.go:179] * Verifying Kubernetes components...
	I1217 08:16:20.238718  898101 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W1217 08:16:20.241618  898101 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1217 08:16:20.242954  898101 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1217 08:16:20.242971  898101 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1217 08:16:20.243014  898101 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1217 08:16:20.242985  898101 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1217 08:16:20.243074  898101 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1217 08:16:20.242958  898101 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.6
	I1217 08:16:20.245176  898101 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 08:16:20.245470  898101 host.go:66] Checking if "addons-102582" exists ...
	I1217 08:16:20.245199  898101 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1217 08:16:20.245661  898101 addons.go:239] Setting addon default-storageclass=true in "addons-102582"
	I1217 08:16:20.245231  898101 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1217 08:16:20.245702  898101 host.go:66] Checking if "addons-102582" exists ...
	I1217 08:16:20.245712  898101 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1217 08:16:20.245747  898101 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-102582"
	I1217 08:16:20.245234  898101 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1217 08:16:20.245805  898101 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1217 08:16:20.245784  898101 host.go:66] Checking if "addons-102582" exists ...
	I1217 08:16:20.246009  898101 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1217 08:16:20.246016  898101 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1217 08:16:20.246883  898101 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I1217 08:16:20.246024  898101 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1217 08:16:20.247461  898101 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1217 08:16:20.246030  898101 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1217 08:16:20.246052  898101 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1217 08:16:20.246055  898101 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 08:16:20.246075  898101 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 08:16:20.246939  898101 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1217 08:16:20.247831  898101 out.go:179]   - Using image docker.io/registry:3.0.0
	I1217 08:16:20.247893  898101 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 08:16:20.248284  898101 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1217 08:16:20.247911  898101 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1217 08:16:20.248568  898101 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1217 08:16:20.248658  898101 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1217 08:16:20.248677  898101 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1217 08:16:20.248690  898101 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 08:16:20.248701  898101 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1217 08:16:20.249318  898101 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 08:16:20.249336  898101 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1217 08:16:20.249369  898101 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1217 08:16:20.249381  898101 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1217 08:16:20.249536  898101 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:16:20.249551  898101 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 08:16:20.250069  898101 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 08:16:20.250086  898101 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 08:16:20.250384  898101 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1217 08:16:20.250500  898101 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 08:16:20.251749  898101 out.go:179]   - Using image docker.io/busybox:stable
	I1217 08:16:20.251983  898101 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 08:16:20.252000  898101 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1217 08:16:20.252798  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.253203  898101 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1217 08:16:20.253999  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.254337  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-102582 Clientid:01:52:54:00:27:0b:32}
	I1217 08:16:20.254371  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.254373  898101 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1217 08:16:20.255258  898101 sshutil.go:56] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582/id_rsa Username:docker}
	I1217 08:16:20.255335  898101 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1217 08:16:20.255385  898101 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 08:16:20.255407  898101 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1217 08:16:20.255496  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-102582 Clientid:01:52:54:00:27:0b:32}
	I1217 08:16:20.255546  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.255865  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.256614  898101 sshutil.go:56] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582/id_rsa Username:docker}
	I1217 08:16:20.257050  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.257448  898101 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1217 08:16:20.257998  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-102582 Clientid:01:52:54:00:27:0b:32}
	I1217 08:16:20.258036  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.258615  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-102582 Clientid:01:52:54:00:27:0b:32}
	I1217 08:16:20.258615  898101 sshutil.go:56] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582/id_rsa Username:docker}
	I1217 08:16:20.258648  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.259383  898101 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1217 08:16:20.259748  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.259755  898101 sshutil.go:56] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582/id_rsa Username:docker}
	I1217 08:16:20.260059  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.260661  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-102582 Clientid:01:52:54:00:27:0b:32}
	I1217 08:16:20.260700  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.260998  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.261142  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-102582 Clientid:01:52:54:00:27:0b:32}
	I1217 08:16:20.261204  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.261257  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.261292  898101 sshutil.go:56] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582/id_rsa Username:docker}
	I1217 08:16:20.261637  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.261779  898101 sshutil.go:56] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582/id_rsa Username:docker}
	I1217 08:16:20.262004  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.262060  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.262189  898101 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1217 08:16:20.262416  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-102582 Clientid:01:52:54:00:27:0b:32}
	I1217 08:16:20.262453  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.262519  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.262672  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-102582 Clientid:01:52:54:00:27:0b:32}
	I1217 08:16:20.262704  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.262961  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-102582 Clientid:01:52:54:00:27:0b:32}
	I1217 08:16:20.262971  898101 sshutil.go:56] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582/id_rsa Username:docker}
	I1217 08:16:20.262999  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.263184  898101 sshutil.go:56] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582/id_rsa Username:docker}
	I1217 08:16:20.263243  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-102582 Clientid:01:52:54:00:27:0b:32}
	I1217 08:16:20.263280  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.263341  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.263642  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-102582 Clientid:01:52:54:00:27:0b:32}
	I1217 08:16:20.263663  898101 sshutil.go:56] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582/id_rsa Username:docker}
	I1217 08:16:20.263683  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.263751  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-102582 Clientid:01:52:54:00:27:0b:32}
	I1217 08:16:20.263787  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.263983  898101 sshutil.go:56] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582/id_rsa Username:docker}
	I1217 08:16:20.264278  898101 sshutil.go:56] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582/id_rsa Username:docker}
	I1217 08:16:20.264294  898101 sshutil.go:56] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582/id_rsa Username:docker}
	I1217 08:16:20.264653  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-102582 Clientid:01:52:54:00:27:0b:32}
	I1217 08:16:20.264686  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.264690  898101 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1217 08:16:20.264872  898101 sshutil.go:56] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582/id_rsa Username:docker}
	I1217 08:16:20.265631  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.266038  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-102582 Clientid:01:52:54:00:27:0b:32}
	I1217 08:16:20.266066  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.266250  898101 sshutil.go:56] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582/id_rsa Username:docker}
	I1217 08:16:20.268622  898101 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1217 08:16:20.268638  898101 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1217 08:16:20.271184  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.271613  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-102582 Clientid:01:52:54:00:27:0b:32}
	I1217 08:16:20.271642  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:20.271831  898101 sshutil.go:56] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582/id_rsa Username:docker}
	I1217 08:16:21.062968  898101 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 08:16:21.184581  898101 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 08:16:21.184692  898101 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 08:16:21.194638  898101 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 08:16:21.266763  898101 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 08:16:21.284623  898101 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 08:16:21.352889  898101 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1217 08:16:21.372013  898101 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1217 08:16:21.372046  898101 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1217 08:16:21.380056  898101 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1217 08:16:21.380082  898101 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1217 08:16:21.385688  898101 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1217 08:16:21.385712  898101 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1217 08:16:21.465254  898101 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 08:16:21.501679  898101 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1217 08:16:21.507229  898101 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 08:16:21.665796  898101 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1217 08:16:21.665829  898101 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1217 08:16:21.676328  898101 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 08:16:21.768304  898101 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1217 08:16:21.768338  898101 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1217 08:16:21.773894  898101 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 08:16:21.932313  898101 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1217 08:16:21.932363  898101 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1217 08:16:21.963161  898101 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1217 08:16:21.963194  898101 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1217 08:16:22.085083  898101 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1217 08:16:22.085113  898101 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1217 08:16:22.230139  898101 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1217 08:16:22.230166  898101 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1217 08:16:22.268641  898101 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1217 08:16:22.268670  898101 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1217 08:16:22.328143  898101 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 08:16:22.328171  898101 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1217 08:16:22.377424  898101 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1217 08:16:22.377451  898101 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1217 08:16:22.423171  898101 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1217 08:16:22.423199  898101 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1217 08:16:22.510289  898101 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1217 08:16:22.612196  898101 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1217 08:16:22.612221  898101 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1217 08:16:22.627742  898101 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 08:16:22.631447  898101 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1217 08:16:22.631469  898101 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1217 08:16:22.775478  898101 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1217 08:16:22.775520  898101 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1217 08:16:22.974679  898101 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1217 08:16:23.006366  898101 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1217 08:16:23.006397  898101 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1217 08:16:23.037056  898101 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.974045077s)
	I1217 08:16:23.166326  898101 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 08:16:23.166352  898101 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1217 08:16:23.365664  898101 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1217 08:16:23.365693  898101 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1217 08:16:23.532368  898101 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1217 08:16:23.532407  898101 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1217 08:16:23.589377  898101 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 08:16:23.804797  898101 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.620172948s)
	I1217 08:16:23.804836  898101 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.620104661s)
	I1217 08:16:23.804885  898101 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1217 08:16:23.805692  898101 node_ready.go:35] waiting up to 6m0s for node "addons-102582" to be "Ready" ...
	I1217 08:16:23.811401  898101 node_ready.go:49] node "addons-102582" is "Ready"
	I1217 08:16:23.811427  898101 node_ready.go:38] duration metric: took 5.71049ms for node "addons-102582" to be "Ready" ...
	I1217 08:16:23.811445  898101 api_server.go:52] waiting for apiserver process to appear ...
	I1217 08:16:23.811499  898101 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 08:16:23.831231  898101 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1217 08:16:23.831252  898101 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1217 08:16:24.312195  898101 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-102582" context rescaled to 1 replicas
	I1217 08:16:24.335555  898101 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1217 08:16:24.335579  898101 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1217 08:16:24.768666  898101 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1217 08:16:24.768696  898101 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1217 08:16:25.205781  898101 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1217 08:16:25.205822  898101 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1217 08:16:25.534180  898101 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1217 08:16:25.589594  898101 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.39491856s)
	I1217 08:16:27.258875  898101 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.992065359s)
	I1217 08:16:27.258885  898101 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.974214678s)
	I1217 08:16:27.259014  898101 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (5.906074824s)
	I1217 08:16:27.259123  898101 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.793835614s)
	I1217 08:16:27.259271  898101 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.757551556s)
	I1217 08:16:27.259329  898101 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.752075761s)
	I1217 08:16:27.259363  898101 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.583008196s)
	I1217 08:16:27.694075  898101 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1217 08:16:27.696851  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:27.697331  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-102582 Clientid:01:52:54:00:27:0b:32}
	I1217 08:16:27.697361  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:27.697548  898101 sshutil.go:56] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582/id_rsa Username:docker}
	I1217 08:16:28.184701  898101 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1217 08:16:28.553696  898101 addons.go:239] Setting addon gcp-auth=true in "addons-102582"
	I1217 08:16:28.553768  898101 host.go:66] Checking if "addons-102582" exists ...
	I1217 08:16:28.556000  898101 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1217 08:16:28.558786  898101 main.go:143] libmachine: domain addons-102582 has defined MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:28.559225  898101 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:0b:32", ip: ""} in network mk-addons-102582: {Iface:virbr1 ExpiryTime:2025-12-17 09:15:55 +0000 UTC Type:0 Mac:52:54:00:27:0b:32 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-102582 Clientid:01:52:54:00:27:0b:32}
	I1217 08:16:28.559255  898101 main.go:143] libmachine: domain addons-102582 has defined IP address 192.168.39.110 and MAC address 52:54:00:27:0b:32 in network mk-addons-102582
	I1217 08:16:28.559421  898101 sshutil.go:56] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/addons-102582/id_rsa Username:docker}
	I1217 08:16:28.583291  898101 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.072966072s)
	I1217 08:16:28.583339  898101 addons.go:495] Verifying addon registry=true in "addons-102582"
	I1217 08:16:28.583476  898101 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.608767361s)
	I1217 08:16:28.583427  898101 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.955618722s)
	I1217 08:16:28.583576  898101 addons.go:495] Verifying addon metrics-server=true in "addons-102582"
	I1217 08:16:28.583438  898101 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.809511901s)
	I1217 08:16:28.583607  898101 addons.go:495] Verifying addon ingress=true in "addons-102582"
	I1217 08:16:28.587651  898101 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-102582 service yakd-dashboard -n yakd-dashboard
	
	I1217 08:16:28.587657  898101 out.go:179] * Verifying registry addon...
	I1217 08:16:28.588557  898101 out.go:179] * Verifying ingress addon...
	I1217 08:16:28.590439  898101 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1217 08:16:28.590446  898101 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1217 08:16:28.622762  898101 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1217 08:16:28.622782  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:28.629262  898101 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1217 08:16:28.629290  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:29.112933  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:29.113190  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:29.324321  898101 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.7348783s)
	I1217 08:16:29.324384  898101 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.512838785s)
	W1217 08:16:29.324394  898101 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1217 08:16:29.324417  898101 api_server.go:72] duration metric: took 9.089918308s to wait for apiserver process to appear ...
	I1217 08:16:29.324425  898101 api_server.go:88] waiting for apiserver healthz status ...
	I1217 08:16:29.324424  898101 retry.go:31] will retry after 221.699512ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1217 08:16:29.324449  898101 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8443/healthz ...
	I1217 08:16:29.381091  898101 api_server.go:279] https://192.168.39.110:8443/healthz returned 200:
	ok
	I1217 08:16:29.388709  898101 api_server.go:141] control plane version: v1.34.3
	I1217 08:16:29.388757  898101 api_server.go:131] duration metric: took 64.321309ms to wait for apiserver health ...
	I1217 08:16:29.388772  898101 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 08:16:29.448472  898101 system_pods.go:59] 17 kube-system pods found
	I1217 08:16:29.448523  898101 system_pods.go:61] "amd-gpu-device-plugin-2ncbq" [4450da79-d912-4714-9520-5bb210b71e93] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 08:16:29.448529  898101 system_pods.go:61] "coredns-66bc5c9577-hxktp" [95165d3d-53c3-4a1e-95b7-ad3c6b4c67c1] Running
	I1217 08:16:29.448536  898101 system_pods.go:61] "coredns-66bc5c9577-r5vph" [ae9727d5-0f6a-453c-a750-7c839f49a7f3] Running
	I1217 08:16:29.448540  898101 system_pods.go:61] "etcd-addons-102582" [f0d5c7ab-e51f-4ffd-a333-a223a472b950] Running
	I1217 08:16:29.448543  898101 system_pods.go:61] "kube-apiserver-addons-102582" [641c7176-95b0-4f48-82cf-d07a12b0426f] Running
	I1217 08:16:29.448547  898101 system_pods.go:61] "kube-controller-manager-addons-102582" [749d1e7b-ca27-4a0d-bf93-12a44066c126] Running
	I1217 08:16:29.448552  898101 system_pods.go:61] "kube-ingress-dns-minikube" [7a0c2344-6104-411e-8897-7546cbed0000] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 08:16:29.448555  898101 system_pods.go:61] "kube-proxy-cvpvx" [4e6b4991-9f62-4b87-9016-9a20ced841d4] Running
	I1217 08:16:29.448558  898101 system_pods.go:61] "kube-scheduler-addons-102582" [622e7ade-b4f3-49c5-bd1a-256ac34fae7d] Running
	I1217 08:16:29.448563  898101 system_pods.go:61] "metrics-server-85b7d694d7-pldbl" [ce43b5f2-7eab-4146-ba0f-023fd611c7ef] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 08:16:29.448572  898101 system_pods.go:61] "nvidia-device-plugin-daemonset-n49qb" [1415b907-fbf3-403f-b62c-ae1fe98ef8d1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 08:16:29.448577  898101 system_pods.go:61] "registry-6b586f9694-zcqnn" [346d82ea-f7c0-41b3-b452-62f34e93ba28] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 08:16:29.448584  898101 system_pods.go:61] "registry-creds-764b6fb674-k98dk" [923dc278-a867-452f-9383-6bf7b7e956ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 08:16:29.448590  898101 system_pods.go:61] "registry-proxy-5h8sx" [92a36d32-ba2f-41a8-9981-9a149c8411c5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 08:16:29.448596  898101 system_pods.go:61] "snapshot-controller-7d9fbc56b8-9cv8h" [260b1d9b-4d41-478e-8f18-2a175053cf75] Pending
	I1217 08:16:29.448600  898101 system_pods.go:61] "snapshot-controller-7d9fbc56b8-x8vvj" [c9698e00-355a-475b-b474-b814858b5c11] Pending
	I1217 08:16:29.448605  898101 system_pods.go:61] "storage-provisioner" [2fa9a663-856d-45f5-a8ac-24b302c5850b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:16:29.448614  898101 system_pods.go:74] duration metric: took 59.833528ms to wait for pod list to return data ...
	I1217 08:16:29.448633  898101 default_sa.go:34] waiting for default service account to be created ...
	I1217 08:16:29.487566  898101 default_sa.go:45] found service account: "default"
	I1217 08:16:29.487596  898101 default_sa.go:55] duration metric: took 38.955694ms for default service account to be created ...
	I1217 08:16:29.487607  898101 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 08:16:29.541905  898101 system_pods.go:86] 17 kube-system pods found
	I1217 08:16:29.541941  898101 system_pods.go:89] "amd-gpu-device-plugin-2ncbq" [4450da79-d912-4714-9520-5bb210b71e93] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 08:16:29.541948  898101 system_pods.go:89] "coredns-66bc5c9577-hxktp" [95165d3d-53c3-4a1e-95b7-ad3c6b4c67c1] Running
	I1217 08:16:29.541954  898101 system_pods.go:89] "coredns-66bc5c9577-r5vph" [ae9727d5-0f6a-453c-a750-7c839f49a7f3] Running
	I1217 08:16:29.541959  898101 system_pods.go:89] "etcd-addons-102582" [f0d5c7ab-e51f-4ffd-a333-a223a472b950] Running
	I1217 08:16:29.541963  898101 system_pods.go:89] "kube-apiserver-addons-102582" [641c7176-95b0-4f48-82cf-d07a12b0426f] Running
	I1217 08:16:29.541966  898101 system_pods.go:89] "kube-controller-manager-addons-102582" [749d1e7b-ca27-4a0d-bf93-12a44066c126] Running
	I1217 08:16:29.541971  898101 system_pods.go:89] "kube-ingress-dns-minikube" [7a0c2344-6104-411e-8897-7546cbed0000] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 08:16:29.541974  898101 system_pods.go:89] "kube-proxy-cvpvx" [4e6b4991-9f62-4b87-9016-9a20ced841d4] Running
	I1217 08:16:29.541980  898101 system_pods.go:89] "kube-scheduler-addons-102582" [622e7ade-b4f3-49c5-bd1a-256ac34fae7d] Running
	I1217 08:16:29.541985  898101 system_pods.go:89] "metrics-server-85b7d694d7-pldbl" [ce43b5f2-7eab-4146-ba0f-023fd611c7ef] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 08:16:29.541990  898101 system_pods.go:89] "nvidia-device-plugin-daemonset-n49qb" [1415b907-fbf3-403f-b62c-ae1fe98ef8d1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 08:16:29.541995  898101 system_pods.go:89] "registry-6b586f9694-zcqnn" [346d82ea-f7c0-41b3-b452-62f34e93ba28] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 08:16:29.542000  898101 system_pods.go:89] "registry-creds-764b6fb674-k98dk" [923dc278-a867-452f-9383-6bf7b7e956ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 08:16:29.542007  898101 system_pods.go:89] "registry-proxy-5h8sx" [92a36d32-ba2f-41a8-9981-9a149c8411c5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 08:16:29.542012  898101 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9cv8h" [260b1d9b-4d41-478e-8f18-2a175053cf75] Pending
	I1217 08:16:29.542050  898101 system_pods.go:89] "snapshot-controller-7d9fbc56b8-x8vvj" [c9698e00-355a-475b-b474-b814858b5c11] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 08:16:29.542061  898101 system_pods.go:89] "storage-provisioner" [2fa9a663-856d-45f5-a8ac-24b302c5850b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 08:16:29.542069  898101 system_pods.go:126] duration metric: took 54.456686ms to wait for k8s-apps to be running ...
	I1217 08:16:29.542078  898101 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 08:16:29.542127  898101 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:16:29.546802  898101 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 08:16:29.618775  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:29.623808  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:30.108863  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:30.121179  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:30.563454  898101 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.029220452s)
	I1217 08:16:30.563524  898101 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-102582"
	I1217 08:16:30.563546  898101 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.007510979s)
	I1217 08:16:30.563621  898101 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.021467574s)
	I1217 08:16:30.563708  898101 system_svc.go:56] duration metric: took 1.021624923s WaitForService to wait for kubelet
	I1217 08:16:30.563723  898101 kubeadm.go:587] duration metric: took 10.329224208s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 08:16:30.563744  898101 node_conditions.go:102] verifying NodePressure condition ...
	I1217 08:16:30.566175  898101 out.go:179] * Verifying csi-hostpath-driver addon...
	I1217 08:16:30.566171  898101 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 08:16:30.567747  898101 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1217 08:16:30.568365  898101 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1217 08:16:30.568962  898101 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1217 08:16:30.568982  898101 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1217 08:16:30.609611  898101 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1217 08:16:30.609645  898101 node_conditions.go:123] node cpu capacity is 2
	I1217 08:16:30.609663  898101 node_conditions.go:105] duration metric: took 45.912982ms to run NodePressure ...
	I1217 08:16:30.609675  898101 start.go:242] waiting for startup goroutines ...
	I1217 08:16:30.640439  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:30.644226  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:30.644336  898101 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1217 08:16:30.644353  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:30.714463  898101 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1217 08:16:30.714489  898101 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1217 08:16:30.868108  898101 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 08:16:30.868133  898101 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1217 08:16:30.914735  898101 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 08:16:31.082046  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:31.099206  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:31.100319  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:31.575847  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:31.596784  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:31.597701  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:31.647016  898101 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.100160751s)
	I1217 08:16:32.018866  898101 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.104074201s)
	I1217 08:16:32.020024  898101 addons.go:495] Verifying addon gcp-auth=true in "addons-102582"
	I1217 08:16:32.021542  898101 out.go:179] * Verifying gcp-auth addon...
	I1217 08:16:32.023676  898101 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1217 08:16:32.057114  898101 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1217 08:16:32.057139  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:32.095802  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:32.097864  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:32.100640  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:32.531067  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:32.573596  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:32.596747  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:32.596866  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:33.030648  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:33.074784  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:33.102254  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:33.102392  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:33.529273  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:33.574804  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:33.597407  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:33.598712  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:34.028538  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:34.072795  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:34.096494  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:34.102578  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:34.527542  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:34.572988  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:34.628676  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:34.628810  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:35.029537  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:35.074627  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:35.098100  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:35.098852  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:35.528707  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:35.576077  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:35.594701  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:35.597042  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:36.028655  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:36.073139  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:36.129293  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:36.129397  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:36.528809  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:36.572735  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:36.594764  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:36.595404  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:37.031342  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:37.073494  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:37.097134  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:37.098487  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:37.529782  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:37.573135  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:37.596553  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:37.598458  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:38.028579  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:38.072170  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:38.095942  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:38.096173  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:38.530478  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:38.575049  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:38.594925  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:38.596235  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:39.028144  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:39.135071  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:39.135943  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:39.136379  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:39.528340  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:39.575049  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:39.601004  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:39.601134  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:40.031376  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:40.075275  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:40.099011  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:40.100727  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:40.527185  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:40.574531  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:40.598276  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:40.600069  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:41.031393  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:41.458987  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:41.461144  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:41.462661  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:41.527451  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:41.573063  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:41.595267  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:41.596631  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:42.027074  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:42.072663  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:42.094701  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:42.094915  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:42.528124  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:42.572193  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:42.594826  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:42.595024  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:43.033927  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:43.072558  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:43.099405  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:43.100061  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:43.529999  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:43.576625  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:43.595657  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:43.599574  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:44.030975  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:44.072451  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:44.098091  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:44.101247  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:44.531097  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:44.576749  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:44.593302  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:44.595631  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:45.028242  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:45.073862  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:45.098211  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:45.100167  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:45.532003  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:45.573657  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:45.596404  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:45.596741  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:46.383364  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:46.383480  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:46.383627  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:46.383725  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:46.527690  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:46.573024  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:46.597488  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:46.597588  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:47.028241  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:47.074783  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:47.099802  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:47.099803  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:47.530232  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:47.573320  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:47.599052  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:47.599461  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:48.027962  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:48.073234  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:48.097582  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:48.100482  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:48.528539  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:48.572732  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:48.597285  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:48.599397  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:49.176653  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:49.176667  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:49.176757  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:49.176774  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:49.530088  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:49.574895  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:49.595880  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:49.596724  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:50.029090  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:50.074195  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:50.104106  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:50.105847  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:50.528578  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:50.572794  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:50.593934  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:50.596016  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:51.027394  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:51.076843  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:51.095015  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:51.096503  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:51.527082  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:51.574706  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:51.596154  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:51.596358  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:52.034687  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:52.072219  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:52.098034  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:52.099622  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:52.527137  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:52.572682  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:52.597696  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:52.597970  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:53.084074  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:53.085767  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:53.096687  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:53.098266  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:53.529093  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:53.573443  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:53.634116  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:53.634421  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:54.029261  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:54.074112  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:54.098571  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:54.099808  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:54.531568  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:54.573294  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:54.595178  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:54.598536  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:55.029593  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:55.074080  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:55.098699  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:55.099087  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:55.530598  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:55.573903  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:55.630918  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:55.631200  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:56.028155  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:56.072364  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:56.095387  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:56.096207  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 08:16:56.531408  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:56.572763  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:56.595284  898101 kapi.go:107] duration metric: took 28.004831567s to wait for kubernetes.io/minikube-addons=registry ...
	I1217 08:16:56.597316  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:57.030163  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:57.076635  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:57.097373  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:57.531536  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:57.572669  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:57.601241  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:58.031218  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:58.075482  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:58.100731  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:58.528063  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:58.573893  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:58.594349  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:59.029780  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:59.075573  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:59.095590  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:16:59.528014  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:16:59.572685  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:16:59.595445  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:00.028931  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:00.073824  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:00.132167  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:00.535433  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:00.578430  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:00.597165  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:01.031399  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:01.073218  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:01.098334  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:01.530993  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:01.574580  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:01.595269  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:02.030796  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:02.072451  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:02.101736  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:02.533315  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:02.573735  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:02.596239  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:03.034337  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:03.078603  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:03.101016  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:03.530637  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:03.575218  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:03.597623  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:04.040184  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:04.137770  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:04.139005  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:04.533286  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:04.632186  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:04.633386  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:05.028170  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:05.072732  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:05.097755  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:05.528108  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:05.576746  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:05.595731  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:06.049386  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:06.073321  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:06.100100  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:06.530829  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:06.577069  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:06.597965  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:07.028860  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:07.075104  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:07.096800  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:07.528011  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:07.572732  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:07.594573  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:08.030152  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:08.267569  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:08.268761  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:08.527358  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:08.575473  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:08.597412  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:09.027795  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:09.072197  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:09.094980  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:09.528911  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:09.572424  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:09.593925  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:10.028452  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:10.073388  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:10.096818  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:10.531601  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:10.630808  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:10.631826  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:11.030133  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:11.072369  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:11.098038  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:11.834907  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:11.835099  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:11.836418  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:12.030875  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:12.072072  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:12.100326  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:12.528602  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:12.571556  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:12.594173  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:13.028422  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:13.072961  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:13.096900  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:13.531392  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:13.573855  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:13.594465  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:14.029897  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:14.131579  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:14.132946  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:14.530054  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:14.574025  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:14.595441  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:15.030538  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:15.074728  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:15.100738  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:15.527084  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:15.572617  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:15.595708  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:16.027605  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:16.071924  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:16.095079  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:16.530657  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:16.572817  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:16.599552  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:17.036009  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:17.134359  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:17.135851  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:17.530226  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:17.630339  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:17.630474  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:18.032394  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:18.093962  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:18.109381  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:18.529076  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:18.575991  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:18.595086  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:19.029766  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:19.073038  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:19.100951  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:19.529937  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:19.574269  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:19.600376  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:20.030499  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:20.073736  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:20.099656  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:20.531501  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:20.632907  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:20.634462  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:21.026887  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:21.073162  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:21.097370  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:21.533285  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:21.631493  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:21.631675  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:22.028575  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:22.072585  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:22.097541  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:22.527111  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:22.574571  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:22.595375  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:23.031947  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:23.072757  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:23.093664  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:23.527914  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:23.571898  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:23.595788  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:24.031425  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:24.090001  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:24.103964  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:24.618356  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:24.618411  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:24.618746  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:25.028257  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:25.072598  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 08:17:25.094527  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:25.527479  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:25.572383  898101 kapi.go:107] duration metric: took 55.00401365s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1217 08:17:25.594207  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:26.028663  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:26.093549  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:26.527357  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:26.594354  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:27.028033  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:27.096354  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:27.528242  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:27.629393  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:28.027566  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:28.094151  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:28.527948  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:28.594210  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:29.027945  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:29.094477  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:29.528363  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:29.594688  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:30.027148  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:30.094368  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:30.526954  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:30.594662  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:31.029628  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:31.094020  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:31.528081  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:31.594769  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:32.027249  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:32.094319  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:32.535972  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:32.597791  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:33.028095  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:33.094445  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:33.530202  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:33.594889  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:34.030568  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:34.131431  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:34.533784  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:34.594713  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:35.030873  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:35.103640  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:35.527577  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:35.595630  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:36.030451  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:36.105140  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:36.531687  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:36.595065  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:37.031013  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:37.098776  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:37.534010  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:37.598489  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:38.027896  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:38.094440  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:38.530686  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:38.594144  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:39.029037  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:39.096708  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:39.529297  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:39.596196  898101 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 08:17:40.028812  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:40.097154  898101 kapi.go:107] duration metric: took 1m11.506712223s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1217 08:17:40.528144  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:41.031952  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:41.531265  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:42.028471  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:42.527907  898101 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 08:17:43.030998  898101 kapi.go:107] duration metric: took 1m11.007318363s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1217 08:17:43.032529  898101 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-102582 cluster.
	I1217 08:17:43.033881  898101 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1217 08:17:43.035200  898101 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1217 08:17:43.036364  898101 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, nvidia-device-plugin, inspektor-gadget, ingress-dns, cloud-spanner, registry-creds, amd-gpu-device-plugin, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1217 08:17:43.037408  898101 addons.go:530] duration metric: took 1m22.802850597s for enable addons: enabled=[default-storageclass storage-provisioner nvidia-device-plugin inspektor-gadget ingress-dns cloud-spanner registry-creds amd-gpu-device-plugin storage-provisioner-rancher metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1217 08:17:43.037459  898101 start.go:247] waiting for cluster config update ...
	I1217 08:17:43.037492  898101 start.go:256] writing updated cluster config ...
	I1217 08:17:43.037798  898101 ssh_runner.go:195] Run: rm -f paused
	I1217 08:17:43.047962  898101 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:17:43.130969  898101 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r5vph" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:17:43.136736  898101 pod_ready.go:94] pod "coredns-66bc5c9577-r5vph" is "Ready"
	I1217 08:17:43.136764  898101 pod_ready.go:86] duration metric: took 5.765003ms for pod "coredns-66bc5c9577-r5vph" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:17:43.138871  898101 pod_ready.go:83] waiting for pod "etcd-addons-102582" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:17:43.142650  898101 pod_ready.go:94] pod "etcd-addons-102582" is "Ready"
	I1217 08:17:43.142671  898101 pod_ready.go:86] duration metric: took 3.78054ms for pod "etcd-addons-102582" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:17:43.144860  898101 pod_ready.go:83] waiting for pod "kube-apiserver-addons-102582" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:17:43.149501  898101 pod_ready.go:94] pod "kube-apiserver-addons-102582" is "Ready"
	I1217 08:17:43.149543  898101 pod_ready.go:86] duration metric: took 4.662489ms for pod "kube-apiserver-addons-102582" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:17:43.152033  898101 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-102582" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:17:43.453935  898101 pod_ready.go:94] pod "kube-controller-manager-addons-102582" is "Ready"
	I1217 08:17:43.453970  898101 pod_ready.go:86] duration metric: took 301.917944ms for pod "kube-controller-manager-addons-102582" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:17:43.653884  898101 pod_ready.go:83] waiting for pod "kube-proxy-cvpvx" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:17:44.052628  898101 pod_ready.go:94] pod "kube-proxy-cvpvx" is "Ready"
	I1217 08:17:44.052670  898101 pod_ready.go:86] duration metric: took 398.752057ms for pod "kube-proxy-cvpvx" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:17:44.253080  898101 pod_ready.go:83] waiting for pod "kube-scheduler-addons-102582" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:17:44.652788  898101 pod_ready.go:94] pod "kube-scheduler-addons-102582" is "Ready"
	I1217 08:17:44.652816  898101 pod_ready.go:86] duration metric: took 399.702061ms for pod "kube-scheduler-addons-102582" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 08:17:44.652827  898101 pod_ready.go:40] duration metric: took 1.60482716s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 08:17:44.702326  898101 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 08:17:44.704148  898101 out.go:179] * Done! kubectl is now configured to use "addons-102582" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 17 08:20:49 addons-102582 crio[817]: time="2025-12-17 08:20:49.857445019Z" level=debug msg="Looking for TLS certificates and private keys in /etc/docker/certs.d/docker.io" file="tlsclientconfig/tlsclientconfig.go:20"
	Dec 17 08:20:49 addons-102582 crio[817]: time="2025-12-17 08:20:49.857483632Z" level=debug msg="GET https://registry-1.docker.io/v2/" file="docker/docker_client.go:631"
	Dec 17 08:20:49 addons-102582 crio[817]: time="2025-12-17 08:20:49.874683501Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d493d9d8-1181-4990-bb0f-ce387fabf68d name=/runtime.v1.RuntimeService/Version
	Dec 17 08:20:49 addons-102582 crio[817]: time="2025-12-17 08:20:49.874936670Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d493d9d8-1181-4990-bb0f-ce387fabf68d name=/runtime.v1.RuntimeService/Version
	Dec 17 08:20:49 addons-102582 crio[817]: time="2025-12-17 08:20:49.876802630Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8a5b48f0-aea3-4cfc-a54c-3f411aacda33 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:20:49 addons-102582 crio[817]: time="2025-12-17 08:20:49.878698321Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765959649878671331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:551113,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a5b48f0-aea3-4cfc-a54c-3f411aacda33 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:20:49 addons-102582 crio[817]: time="2025-12-17 08:20:49.879819364Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2f867df0-ee11-4706-889b-84308f111892 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:20:49 addons-102582 crio[817]: time="2025-12-17 08:20:49.879927437Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2f867df0-ee11-4706-889b-84308f111892 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:20:49 addons-102582 crio[817]: time="2025-12-17 08:20:49.880222116Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf569d2865ae1a63f220e13043bfd45b458b3183bbf796ee69bd7d9462292203,PodSandboxId:7b84df0244124b3793a6bead7b86653774a3b074f3abb7b4bd13da2d5016b410,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765959507727275266,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 796dd940-7d71-4204-ae81-121379bea215,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:027b8091346c2f0ad613404b5c65bb43b1bbaec8bd52e2e852ed39b124c8b716,PodSandboxId:f09fe312eaee5a2970d528ea75116a119655ff8fb98ffaad0434def80d1be2cb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765959467154198292,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5ba9b1e5-27b4-431a-8877-f939828de8e0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ccc7164eb6fea6caa560ad22347d2ccbd8e08dd7379d5592d4ac7668d93dac9,PodSandboxId:e51fc110cdddceb21ef00a9ed14948ab193ec5c842a767bc2fa3a2fb5f82b647,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765959459681791201,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-6mm56,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 93cad92c-6b1a-49e3-9a49-7ff7de55dd3b,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:449a3639941b0fc2ba5cc45502515436eaad72c449280302dacea13ff2d03455,PodSandboxId:a6ba8702e661594c804deb9d8845b6905ae0576429794ddc211ae42dfbdfe905,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e
,State:CONTAINER_EXITED,CreatedAt:1765959441815009667,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5l9hl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fe23a914-1b79-48ca-97ee-972fe3826f3c,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fea2969cab6aa4a88e41e740497d6ab0b591dbe27bc75d3d08b403cd19b2b3d7,PodSandboxId:43552eb4d293efb9b7120ce7fb2b94a17381c03e28f8d5b6712d9a7056606f4b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7
e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765959428658472108,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9kfwn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 586053c8-eefb-4751-94ae-019563b00b73,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ef79e35a7e6d332577e9139d384414835625d1b308a627289b07325ba98f1d,PodSandboxId:95e4bf24ddd99eaf9cc69d9b5a3f1f14165c5ad9474516beefb351c1bde18a79,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765959410676100265,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0c2344-6104-411e-8897-7546cbed0000,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e15c4ac6d2cd529e4aae590f2dbaba5c223aa94d6e8ad2209b058a6066a7db9b,PodSandboxId:8cd07ffaa4853a601338fff4f8fd722c21ad0a43aa3cb4509978b94f20e99a7c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38
35498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765959397813221089,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-2ncbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4450da79-d912-4714-9520-5bb210b71e93,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4e649dca23d592d2d8593c371a115469cf04e57d7f3966505a0128d1f86d111,PodSandboxId:55c56d43188aac11738007edd491c2531b2595da0b27f3323b63285208da6f4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765959389176766303,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fa9a663-856d-45f5-a8ac-24b302c5850b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b4130ecb4458b98757876551cc183214a6574f44f0975b520e3ed28296f319,PodSandboxId:5cc643adf3eac7dfb7471cec5b88d8e0f2012f617357a806bb1c79349c84c785,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a
9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765959381561677563,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-r5vph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9727d5-0f6a-453c-a750-7c839f49a7f3,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fd5764d72d08733b06af4fb200f737e06edd08eda4a1997abb76c1c3740a8bc,PodSandboxId:2af2d2f16a4fe8a2ef7ebe3f1eabf3164777484c23438e7c95c80a93171ca255,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765959380630285398,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cvpvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e6b4991-9f62-4b87-9016-9a20ced841d4,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:083799c842945f40d47253da936788b9111ee64723491e10dffcd33a1965da53,PodSandboxId:04a76fe4048511a78934b2af44f43918e79acd9fb979cdb7a4e4994956b7abc7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765959368966682217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-102582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e6cc4bb220c85258ecd6fa48bc3e2e,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d85a8663bb7785064dad39fe626a60a22eb5397025a7a29fe476730166000863,PodSandboxId:6cdf352d3192eaa295f520bdcddeb211be722662fc2ee72fa4ef0351c3346c34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765959368977298280,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-102582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23df8f88f439ef48b3fd25e852446273,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe
-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:645fc121be9292d194f36a33d0605b544a1b414210811ecce4686280c49d0acd,PodSandboxId:fc2e573ec8844116bb51be05d8ea7b8b3fedcf226bfc18ccb257434ff552e2fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765959368981573513,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-102582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1425dde2833e1ca56b06c2818fe93d25,},Annotations:map[string]string{io.kubernete
s.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a79370e7844d56d81460604102e695e343b11cfbfd85800f09d320449b336e65,PodSandboxId:b802330ce9d74438bd2fc1ac0410a3fe4ef69c7075887b1f03a3a0b19c1d0339,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765959368923025085,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-102582,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 76eb23482bc644de1eb1bbacb5c84312,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2f867df0-ee11-4706-889b-84308f111892 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:20:49 addons-102582 crio[817]: time="2025-12-17 08:20:49.912517491Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5f65bb4c-80f9-41e7-abcd-49b4487ad2ad name=/runtime.v1.RuntimeService/Version
	Dec 17 08:20:49 addons-102582 crio[817]: time="2025-12-17 08:20:49.912618195Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5f65bb4c-80f9-41e7-abcd-49b4487ad2ad name=/runtime.v1.RuntimeService/Version
	Dec 17 08:20:49 addons-102582 crio[817]: time="2025-12-17 08:20:49.914718040Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4af9e69c-fbb9-460c-b4e6-42f988622b2f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:20:49 addons-102582 crio[817]: time="2025-12-17 08:20:49.916000891Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765959649915973018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:551113,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4af9e69c-fbb9-460c-b4e6-42f988622b2f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:20:49 addons-102582 crio[817]: time="2025-12-17 08:20:49.917458226Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=21642165-275f-46d2-a03c-079ec0ed0fcb name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:20:49 addons-102582 crio[817]: time="2025-12-17 08:20:49.917514565Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=21642165-275f-46d2-a03c-079ec0ed0fcb name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:20:49 addons-102582 crio[817]: time="2025-12-17 08:20:49.917881154Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf569d2865ae1a63f220e13043bfd45b458b3183bbf796ee69bd7d9462292203,PodSandboxId:7b84df0244124b3793a6bead7b86653774a3b074f3abb7b4bd13da2d5016b410,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765959507727275266,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 796dd940-7d71-4204-ae81-121379bea215,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:027b8091346c2f0ad613404b5c65bb43b1bbaec8bd52e2e852ed39b124c8b716,PodSandboxId:f09fe312eaee5a2970d528ea75116a119655ff8fb98ffaad0434def80d1be2cb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765959467154198292,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5ba9b1e5-27b4-431a-8877-f939828de8e0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ccc7164eb6fea6caa560ad22347d2ccbd8e08dd7379d5592d4ac7668d93dac9,PodSandboxId:e51fc110cdddceb21ef00a9ed14948ab193ec5c842a767bc2fa3a2fb5f82b647,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765959459681791201,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-6mm56,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 93cad92c-6b1a-49e3-9a49-7ff7de55dd3b,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:449a3639941b0fc2ba5cc45502515436eaad72c449280302dacea13ff2d03455,PodSandboxId:a6ba8702e661594c804deb9d8845b6905ae0576429794ddc211ae42dfbdfe905,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e
,State:CONTAINER_EXITED,CreatedAt:1765959441815009667,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5l9hl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fe23a914-1b79-48ca-97ee-972fe3826f3c,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fea2969cab6aa4a88e41e740497d6ab0b591dbe27bc75d3d08b403cd19b2b3d7,PodSandboxId:43552eb4d293efb9b7120ce7fb2b94a17381c03e28f8d5b6712d9a7056606f4b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7
e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765959428658472108,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9kfwn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 586053c8-eefb-4751-94ae-019563b00b73,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ef79e35a7e6d332577e9139d384414835625d1b308a627289b07325ba98f1d,PodSandboxId:95e4bf24ddd99eaf9cc69d9b5a3f1f14165c5ad9474516beefb351c1bde18a79,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765959410676100265,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0c2344-6104-411e-8897-7546cbed0000,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e15c4ac6d2cd529e4aae590f2dbaba5c223aa94d6e8ad2209b058a6066a7db9b,PodSandboxId:8cd07ffaa4853a601338fff4f8fd722c21ad0a43aa3cb4509978b94f20e99a7c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38
35498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765959397813221089,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-2ncbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4450da79-d912-4714-9520-5bb210b71e93,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4e649dca23d592d2d8593c371a115469cf04e57d7f3966505a0128d1f86d111,PodSandboxId:55c56d43188aac11738007edd491c2531b2595da0b27f3323b63285208da6f4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765959389176766303,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fa9a663-856d-45f5-a8ac-24b302c5850b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b4130ecb4458b98757876551cc183214a6574f44f0975b520e3ed28296f319,PodSandboxId:5cc643adf3eac7dfb7471cec5b88d8e0f2012f617357a806bb1c79349c84c785,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a
9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765959381561677563,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-r5vph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9727d5-0f6a-453c-a750-7c839f49a7f3,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fd5764d72d08733b06af4fb200f737e06edd08eda4a1997abb76c1c3740a8bc,PodSandboxId:2af2d2f16a4fe8a2ef7ebe3f1eabf3164777484c23438e7c95c80a93171ca255,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765959380630285398,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cvpvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e6b4991-9f62-4b87-9016-9a20ced841d4,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:083799c842945f40d47253da936788b9111ee64723491e10dffcd33a1965da53,PodSandboxId:04a76fe4048511a78934b2af44f43918e79acd9fb979cdb7a4e4994956b7abc7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765959368966682217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-102582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e6cc4bb220c85258ecd6fa48bc3e2e,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d85a8663bb7785064dad39fe626a60a22eb5397025a7a29fe476730166000863,PodSandboxId:6cdf352d3192eaa295f520bdcddeb211be722662fc2ee72fa4ef0351c3346c34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765959368977298280,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-102582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23df8f88f439ef48b3fd25e852446273,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe
-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:645fc121be9292d194f36a33d0605b544a1b414210811ecce4686280c49d0acd,PodSandboxId:fc2e573ec8844116bb51be05d8ea7b8b3fedcf226bfc18ccb257434ff552e2fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765959368981573513,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-102582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1425dde2833e1ca56b06c2818fe93d25,},Annotations:map[string]string{io.kubernete
s.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a79370e7844d56d81460604102e695e343b11cfbfd85800f09d320449b336e65,PodSandboxId:b802330ce9d74438bd2fc1ac0410a3fe4ef69c7075887b1f03a3a0b19c1d0339,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765959368923025085,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-102582,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 76eb23482bc644de1eb1bbacb5c84312,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=21642165-275f-46d2-a03c-079ec0ed0fcb name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:20:49 addons-102582 crio[817]: time="2025-12-17 08:20:49.949842239Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eb896ac7-797f-4bb4-9b79-44a2e8f0148c name=/runtime.v1.RuntimeService/Version
	Dec 17 08:20:49 addons-102582 crio[817]: time="2025-12-17 08:20:49.949949961Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eb896ac7-797f-4bb4-9b79-44a2e8f0148c name=/runtime.v1.RuntimeService/Version
	Dec 17 08:20:49 addons-102582 crio[817]: time="2025-12-17 08:20:49.952575312Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f99533a2-cdee-4628-b34e-8bae3808003c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:20:49 addons-102582 crio[817]: time="2025-12-17 08:20:49.954047348Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765959649954023843,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:551113,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f99533a2-cdee-4628-b34e-8bae3808003c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:20:49 addons-102582 crio[817]: time="2025-12-17 08:20:49.955242863Z" level=debug msg="Ping https://registry-1.docker.io/v2/ status 401" file="docker/docker_client.go:901"
	Dec 17 08:20:49 addons-102582 crio[817]: time="2025-12-17 08:20:49.955729484Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c226b51-ceed-412b-b976-2d25ff58c9d5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:20:49 addons-102582 crio[817]: time="2025-12-17 08:20:49.955786796Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c226b51-ceed-412b-b976-2d25ff58c9d5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:20:49 addons-102582 crio[817]: time="2025-12-17 08:20:49.956005097Z" level=debug msg="GET https://auth.docker.io/token?scope=repository%3Akicbase%2Fecho-server%3Apull&service=registry.docker.io" file="docker/docker_client.go:861"
	Dec 17 08:20:49 addons-102582 crio[817]: time="2025-12-17 08:20:49.956259520Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf569d2865ae1a63f220e13043bfd45b458b3183bbf796ee69bd7d9462292203,PodSandboxId:7b84df0244124b3793a6bead7b86653774a3b074f3abb7b4bd13da2d5016b410,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765959507727275266,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 796dd940-7d71-4204-ae81-121379bea215,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:027b8091346c2f0ad613404b5c65bb43b1bbaec8bd52e2e852ed39b124c8b716,PodSandboxId:f09fe312eaee5a2970d528ea75116a119655ff8fb98ffaad0434def80d1be2cb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765959467154198292,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5ba9b1e5-27b4-431a-8877-f939828de8e0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ccc7164eb6fea6caa560ad22347d2ccbd8e08dd7379d5592d4ac7668d93dac9,PodSandboxId:e51fc110cdddceb21ef00a9ed14948ab193ec5c842a767bc2fa3a2fb5f82b647,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765959459681791201,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-6mm56,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 93cad92c-6b1a-49e3-9a49-7ff7de55dd3b,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:449a3639941b0fc2ba5cc45502515436eaad72c449280302dacea13ff2d03455,PodSandboxId:a6ba8702e661594c804deb9d8845b6905ae0576429794ddc211ae42dfbdfe905,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e
,State:CONTAINER_EXITED,CreatedAt:1765959441815009667,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5l9hl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fe23a914-1b79-48ca-97ee-972fe3826f3c,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fea2969cab6aa4a88e41e740497d6ab0b591dbe27bc75d3d08b403cd19b2b3d7,PodSandboxId:43552eb4d293efb9b7120ce7fb2b94a17381c03e28f8d5b6712d9a7056606f4b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7
e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765959428658472108,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9kfwn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 586053c8-eefb-4751-94ae-019563b00b73,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ef79e35a7e6d332577e9139d384414835625d1b308a627289b07325ba98f1d,PodSandboxId:95e4bf24ddd99eaf9cc69d9b5a3f1f14165c5ad9474516beefb351c1bde18a79,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765959410676100265,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0c2344-6104-411e-8897-7546cbed0000,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e15c4ac6d2cd529e4aae590f2dbaba5c223aa94d6e8ad2209b058a6066a7db9b,PodSandboxId:8cd07ffaa4853a601338fff4f8fd722c21ad0a43aa3cb4509978b94f20e99a7c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38
35498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765959397813221089,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-2ncbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4450da79-d912-4714-9520-5bb210b71e93,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4e649dca23d592d2d8593c371a115469cf04e57d7f3966505a0128d1f86d111,PodSandboxId:55c56d43188aac11738007edd491c2531b2595da0b27f3323b63285208da6f4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765959389176766303,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fa9a663-856d-45f5-a8ac-24b302c5850b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b4130ecb4458b98757876551cc183214a6574f44f0975b520e3ed28296f319,PodSandboxId:5cc643adf3eac7dfb7471cec5b88d8e0f2012f617357a806bb1c79349c84c785,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a
9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765959381561677563,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-r5vph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9727d5-0f6a-453c-a750-7c839f49a7f3,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fd5764d72d08733b06af4fb200f737e06edd08eda4a1997abb76c1c3740a8bc,PodSandboxId:2af2d2f16a4fe8a2ef7ebe3f1eabf3164777484c23438e7c95c80a93171ca255,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765959380630285398,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cvpvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e6b4991-9f62-4b87-9016-9a20ced841d4,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:083799c842945f40d47253da936788b9111ee64723491e10dffcd33a1965da53,PodSandboxId:04a76fe4048511a78934b2af44f43918e79acd9fb979cdb7a4e4994956b7abc7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765959368966682217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-102582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e6cc4bb220c85258ecd6fa48bc3e2e,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d85a8663bb7785064dad39fe626a60a22eb5397025a7a29fe476730166000863,PodSandboxId:6cdf352d3192eaa295f520bdcddeb211be722662fc2ee72fa4ef0351c3346c34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765959368977298280,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-102582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23df8f88f439ef48b3fd25e852446273,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe
-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:645fc121be9292d194f36a33d0605b544a1b414210811ecce4686280c49d0acd,PodSandboxId:fc2e573ec8844116bb51be05d8ea7b8b3fedcf226bfc18ccb257434ff552e2fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765959368981573513,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-102582,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1425dde2833e1ca56b06c2818fe93d25,},Annotations:map[string]string{io.kubernete
s.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a79370e7844d56d81460604102e695e343b11cfbfd85800f09d320449b336e65,PodSandboxId:b802330ce9d74438bd2fc1ac0410a3fe4ef69c7075887b1f03a3a0b19c1d0339,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765959368923025085,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-102582,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 76eb23482bc644de1eb1bbacb5c84312,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c226b51-ceed-412b-b976-2d25ff58c9d5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	cf569d2865ae1       public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff                           2 minutes ago       Running             nginx                     0                   7b84df0244124       nginx                                       default
	027b8091346c2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   f09fe312eaee5       busybox                                     default
	2ccc7164eb6fe       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad             3 minutes ago       Running             controller                0                   e51fc110cdddc       ingress-nginx-controller-85d4c799dd-6mm56   ingress-nginx
	449a3639941b0       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                             3 minutes ago       Exited              patch                     2                   a6ba8702e6615       ingress-nginx-admission-patch-5l9hl         ingress-nginx
	fea2969cab6aa       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   3 minutes ago       Exited              create                    0                   43552eb4d293e       ingress-nginx-admission-create-9kfwn        ingress-nginx
	97ef79e35a7e6       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               3 minutes ago       Running             minikube-ingress-dns      0                   95e4bf24ddd99       kube-ingress-dns-minikube                   kube-system
	e15c4ac6d2cd5       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   8cd07ffaa4853       amd-gpu-device-plugin-2ncbq                 kube-system
	d4e649dca23d5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   55c56d43188aa       storage-provisioner                         kube-system
	26b4130ecb445       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   5cc643adf3eac       coredns-66bc5c9577-r5vph                    kube-system
	8fd5764d72d08       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                             4 minutes ago       Running             kube-proxy                0                   2af2d2f16a4fe       kube-proxy-cvpvx                            kube-system
	645fc121be929       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                             4 minutes ago       Running             etcd                      0                   fc2e573ec8844       etcd-addons-102582                          kube-system
	d85a8663bb778       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                             4 minutes ago       Running             kube-controller-manager   0                   6cdf352d3192e       kube-controller-manager-addons-102582       kube-system
	083799c842945       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                             4 minutes ago       Running             kube-scheduler            0                   04a76fe404851       kube-scheduler-addons-102582                kube-system
	a79370e7844d5       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                             4 minutes ago       Running             kube-apiserver            0                   b802330ce9d74       kube-apiserver-addons-102582                kube-system
	
	
	==> coredns [26b4130ecb4458b98757876551cc183214a6574f44f0975b520e3ed28296f319] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	[INFO] Reloading complete
	[INFO] 10.244.0.23:40274 - 41080 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000453775s
	[INFO] 10.244.0.23:51355 - 41510 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000566784s
	[INFO] 10.244.0.23:47767 - 7762 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124219s
	[INFO] 10.244.0.23:56275 - 15045 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000151911s
	[INFO] 10.244.0.23:41210 - 56482 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000085367s
	[INFO] 10.244.0.23:33030 - 63369 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000196674s
	[INFO] 10.244.0.23:34079 - 19809 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000904221s
	[INFO] 10.244.0.23:35704 - 15673 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001321552s
	[INFO] 10.244.0.28:59777 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000324438s
	[INFO] 10.244.0.28:38752 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000571275s
	
	
	==> describe nodes <==
	Name:               addons-102582
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-102582
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144
	                    minikube.k8s.io/name=addons-102582
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T08_16_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-102582
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 08:16:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-102582
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 08:20:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 08:18:48 +0000   Wed, 17 Dec 2025 08:16:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 08:18:48 +0000   Wed, 17 Dec 2025 08:16:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 08:18:48 +0000   Wed, 17 Dec 2025 08:16:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 08:18:48 +0000   Wed, 17 Dec 2025 08:16:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    addons-102582
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 6bca442ebd30417384e1edaca2232929
	  System UUID:                6bca442e-bd30-4173-84e1-edaca2232929
	  Boot ID:                    134038ab-9666-4dda-b76a-50c70c261488
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	  default                     hello-world-app-5d498dc89-kn6rf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-6mm56    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m22s
	  kube-system                 amd-gpu-device-plugin-2ncbq                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 coredns-66bc5c9577-r5vph                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m30s
	  kube-system                 etcd-addons-102582                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m37s
	  kube-system                 kube-apiserver-addons-102582                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-controller-manager-addons-102582        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-proxy-cvpvx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-scheduler-addons-102582                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m28s  kube-proxy       
	  Normal  Starting                 4m36s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m36s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m36s  kubelet          Node addons-102582 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m36s  kubelet          Node addons-102582 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m36s  kubelet          Node addons-102582 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m35s  kubelet          Node addons-102582 status is now: NodeReady
	  Normal  RegisteredNode           4m32s  node-controller  Node addons-102582 event: Registered Node addons-102582 in Controller
	
	
	==> dmesg <==
	[  +0.000067] kauditd_printk_skb: 310 callbacks suppressed
	[  +1.916268] kauditd_printk_skb: 361 callbacks suppressed
	[  +6.098641] kauditd_printk_skb: 20 callbacks suppressed
	[  +8.765458] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.070449] kauditd_printk_skb: 26 callbacks suppressed
	[Dec17 08:17] kauditd_printk_skb: 101 callbacks suppressed
	[  +5.982432] kauditd_printk_skb: 48 callbacks suppressed
	[  +2.989610] kauditd_printk_skb: 76 callbacks suppressed
	[  +5.087719] kauditd_printk_skb: 95 callbacks suppressed
	[  +1.007814] kauditd_printk_skb: 115 callbacks suppressed
	[  +0.000109] kauditd_printk_skb: 7 callbacks suppressed
	[  +0.000108] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.096049] kauditd_printk_skb: 68 callbacks suppressed
	[  +6.377374] kauditd_printk_skb: 47 callbacks suppressed
	[  +4.636908] kauditd_printk_skb: 2 callbacks suppressed
	[Dec17 08:18] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.000089] kauditd_printk_skb: 109 callbacks suppressed
	[  +0.000255] kauditd_printk_skb: 159 callbacks suppressed
	[  +4.463465] kauditd_printk_skb: 188 callbacks suppressed
	[  +5.926141] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.002092] kauditd_printk_skb: 15 callbacks suppressed
	[  +4.423344] kauditd_printk_skb: 73 callbacks suppressed
	[Dec17 08:19] kauditd_printk_skb: 30 callbacks suppressed
	[  +7.833686] kauditd_printk_skb: 41 callbacks suppressed
	[Dec17 08:20] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [645fc121be9292d194f36a33d0605b544a1b414210811ecce4686280c49d0acd] <==
	{"level":"info","ts":"2025-12-17T08:17:11.826241Z","caller":"traceutil/trace.go:172","msg":"trace[381537229] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1010; }","duration":"241.319115ms","start":"2025-12-17T08:17:11.584911Z","end":"2025-12-17T08:17:11.826230Z","steps":["trace[381537229] 'range keys from in-memory index tree'  (duration: 240.367758ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T08:17:11.825814Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"146.899752ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T08:17:11.826774Z","caller":"traceutil/trace.go:172","msg":"trace[888901256] range","detail":"{range_begin:/registry/replicasets; range_end:; response_count:0; response_revision:1010; }","duration":"147.858301ms","start":"2025-12-17T08:17:11.678907Z","end":"2025-12-17T08:17:11.826765Z","steps":["trace[888901256] 'range keys from in-memory index tree'  (duration: 146.857965ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T08:17:11.825833Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"238.521496ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T08:17:11.827929Z","caller":"traceutil/trace.go:172","msg":"trace[1635440779] range","detail":"{range_begin:/registry/horizontalpodautoscalers; range_end:; response_count:0; response_revision:1010; }","duration":"240.611681ms","start":"2025-12-17T08:17:11.587309Z","end":"2025-12-17T08:17:11.827920Z","steps":["trace[1635440779] 'range keys from in-memory index tree'  (duration: 238.471446ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:17:20.483336Z","caller":"traceutil/trace.go:172","msg":"trace[42957356] transaction","detail":"{read_only:false; response_revision:1074; number_of_response:1; }","duration":"134.593116ms","start":"2025-12-17T08:17:20.348729Z","end":"2025-12-17T08:17:20.483322Z","steps":["trace[42957356] 'process raft request'  (duration: 133.953061ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:17:24.597459Z","caller":"traceutil/trace.go:172","msg":"trace[1277886843] transaction","detail":"{read_only:false; response_revision:1111; number_of_response:1; }","duration":"407.213285ms","start":"2025-12-17T08:17:24.190236Z","end":"2025-12-17T08:17:24.597450Z","steps":["trace[1277886843] 'process raft request'  (duration: 407.02785ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T08:17:24.597575Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T08:17:24.190215Z","time spent":"407.297237ms","remote":"127.0.0.1:57570","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4457,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-patch\" mod_revision:689 > success:<request_put:<key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-patch\" value_size:4391 >> failure:<request_range:<key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-patch\" > >"}
	{"level":"info","ts":"2025-12-17T08:17:24.597156Z","caller":"traceutil/trace.go:172","msg":"trace[1868633216] linearizableReadLoop","detail":"{readStateIndex:1139; appliedIndex:1139; }","duration":"376.110633ms","start":"2025-12-17T08:17:24.221027Z","end":"2025-12-17T08:17:24.597138Z","steps":["trace[1868633216] 'read index received'  (duration: 376.105979ms)","trace[1868633216] 'applied index is now lower than readState.Index'  (duration: 3.943µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T08:17:24.598233Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"377.22392ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-5l9hl\" limit:1 ","response":"range_response_count:1 size:4635"}
	{"level":"info","ts":"2025-12-17T08:17:24.598329Z","caller":"traceutil/trace.go:172","msg":"trace[2108703007] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-patch-5l9hl; range_end:; response_count:1; response_revision:1111; }","duration":"377.323738ms","start":"2025-12-17T08:17:24.220998Z","end":"2025-12-17T08:17:24.598322Z","steps":["trace[2108703007] 'agreement among raft nodes before linearized reading'  (duration: 377.107599ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T08:17:24.598408Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T08:17:24.220982Z","time spent":"377.418818ms","remote":"127.0.0.1:57474","response type":"/etcdserverpb.KV/Range","request count":0,"request size":68,"response count":1,"response size":4659,"request content":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-5l9hl\" limit:1 "}
	{"level":"warn","ts":"2025-12-17T08:17:24.598619Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"260.239145ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T08:17:24.598706Z","caller":"traceutil/trace.go:172","msg":"trace[692672516] range","detail":"{range_begin:/registry/roles; range_end:; response_count:0; response_revision:1111; }","duration":"260.326561ms","start":"2025-12-17T08:17:24.338373Z","end":"2025-12-17T08:17:24.598700Z","steps":["trace[692672516] 'agreement among raft nodes before linearized reading'  (duration: 260.216703ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T08:17:24.601402Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"206.154325ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T08:17:24.601604Z","caller":"traceutil/trace.go:172","msg":"trace[1584530976] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1111; }","duration":"206.395099ms","start":"2025-12-17T08:17:24.395201Z","end":"2025-12-17T08:17:24.601596Z","steps":["trace[1584530976] 'agreement among raft nodes before linearized reading'  (duration: 206.136354ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:17:38.863007Z","caller":"traceutil/trace.go:172","msg":"trace[1479114485] transaction","detail":"{read_only:false; response_revision:1144; number_of_response:1; }","duration":"136.93292ms","start":"2025-12-17T08:17:38.726060Z","end":"2025-12-17T08:17:38.862993Z","steps":["trace[1479114485] 'process raft request'  (duration: 136.719823ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:17:41.015983Z","caller":"traceutil/trace.go:172","msg":"trace[2007833704] transaction","detail":"{read_only:false; response_revision:1156; number_of_response:1; }","duration":"142.216085ms","start":"2025-12-17T08:17:40.873754Z","end":"2025-12-17T08:17:41.015970Z","steps":["trace[2007833704] 'process raft request'  (duration: 142.147538ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:18:11.291688Z","caller":"traceutil/trace.go:172","msg":"trace[164817129] linearizableReadLoop","detail":"{readStateIndex:1399; appliedIndex:1399; }","duration":"144.46303ms","start":"2025-12-17T08:18:11.147205Z","end":"2025-12-17T08:18:11.291668Z","steps":["trace[164817129] 'read index received'  (duration: 144.456531ms)","trace[164817129] 'applied index is now lower than readState.Index'  (duration: 5.703µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T08:18:11.292391Z","caller":"traceutil/trace.go:172","msg":"trace[1169278981] transaction","detail":"{read_only:false; response_revision:1359; number_of_response:1; }","duration":"203.445639ms","start":"2025-12-17T08:18:11.088901Z","end":"2025-12-17T08:18:11.292346Z","steps":["trace[1169278981] 'process raft request'  (duration: 203.228568ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T08:18:11.292244Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"145.02088ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T08:18:11.293366Z","caller":"traceutil/trace.go:172","msg":"trace[322420264] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1358; }","duration":"146.150435ms","start":"2025-12-17T08:18:11.147202Z","end":"2025-12-17T08:18:11.293352Z","steps":["trace[322420264] 'agreement among raft nodes before linearized reading'  (duration: 144.929445ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:18:11.294985Z","caller":"traceutil/trace.go:172","msg":"trace[1486409157] transaction","detail":"{read_only:false; response_revision:1360; number_of_response:1; }","duration":"114.684314ms","start":"2025-12-17T08:18:11.180292Z","end":"2025-12-17T08:18:11.294976Z","steps":["trace[1486409157] 'process raft request'  (duration: 114.638932ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:18:15.801068Z","caller":"traceutil/trace.go:172","msg":"trace[449840320] transaction","detail":"{read_only:false; response_revision:1395; number_of_response:1; }","duration":"133.580663ms","start":"2025-12-17T08:18:15.667466Z","end":"2025-12-17T08:18:15.801047Z","steps":["trace[449840320] 'process raft request'  (duration: 133.485505ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:18:41.323436Z","caller":"traceutil/trace.go:172","msg":"trace[523262597] transaction","detail":"{read_only:false; response_revision:1616; number_of_response:1; }","duration":"159.282929ms","start":"2025-12-17T08:18:41.164132Z","end":"2025-12-17T08:18:41.323415Z","steps":["trace[523262597] 'process raft request'  (duration: 159.139229ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:20:50 up 5 min,  0 users,  load average: 0.51, 1.14, 0.60
	Linux addons-102582 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Dec 16 03:41:16 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [a79370e7844d56d81460604102e695e343b11cfbfd85800f09d320449b336e65] <==
	 > logger="UnhandledError"
	E1217 08:17:09.450699       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.49.209:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.49.209:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1217 08:17:09.472055       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1217 08:17:54.503396       1 conn.go:339] Error on socket receive: read tcp 192.168.39.110:8443->192.168.39.1:48192: use of closed network connection
	E1217 08:17:54.698384       1 conn.go:339] Error on socket receive: read tcp 192.168.39.110:8443->192.168.39.1:48216: use of closed network connection
	I1217 08:18:04.059131       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.68.184"}
	I1217 08:18:24.041195       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1217 08:18:24.226011       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.31.138"}
	E1217 08:18:31.995470       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1217 08:18:47.136304       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1217 08:19:10.472174       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1217 08:19:12.873084       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1217 08:19:12.873309       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1217 08:19:12.918840       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1217 08:19:12.918892       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1217 08:19:12.922463       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1217 08:19:12.922510       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1217 08:19:12.944179       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1217 08:19:12.945709       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1217 08:19:12.976239       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1217 08:19:12.976288       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1217 08:19:13.923180       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1217 08:19:13.977170       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1217 08:19:13.988295       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1217 08:20:48.879497       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.228.133"}
	
	
	==> kube-controller-manager [d85a8663bb7785064dad39fe626a60a22eb5397025a7a29fe476730166000863] <==
	I1217 08:19:20.181315       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1217 08:19:22.101929       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 08:19:22.102968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 08:19:23.276286       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 08:19:23.277416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 08:19:23.934513       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 08:19:23.935465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 08:19:31.284514       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 08:19:31.285679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 08:19:35.459509       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 08:19:35.460449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 08:19:35.489381       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 08:19:35.490388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 08:19:50.162794       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 08:19:50.163769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 08:19:50.449222       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 08:19:50.450214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 08:19:56.618007       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 08:19:56.619015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 08:20:30.068786       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 08:20:30.070273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 08:20:33.409566       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 08:20:33.410571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 08:20:37.602495       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 08:20:37.603721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [8fd5764d72d08733b06af4fb200f737e06edd08eda4a1997abb76c1c3740a8bc] <==
	I1217 08:16:21.521608       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 08:16:21.630152       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 08:16:21.644392       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.110"]
	E1217 08:16:21.644499       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 08:16:21.778037       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1217 08:16:21.810888       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1217 08:16:21.838126       1 server_linux.go:132] "Using iptables Proxier"
	I1217 08:16:21.977086       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 08:16:21.977447       1 server.go:527] "Version info" version="v1.34.3"
	I1217 08:16:21.977459       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:16:21.999971       1 config.go:200] "Starting service config controller"
	I1217 08:16:21.999999       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 08:16:22.000028       1 config.go:106] "Starting endpoint slice config controller"
	I1217 08:16:22.000032       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 08:16:22.000042       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 08:16:22.000045       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 08:16:22.000478       1 config.go:309] "Starting node config controller"
	I1217 08:16:22.000505       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 08:16:22.000511       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 08:16:22.101380       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 08:16:22.101431       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 08:16:22.108702       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [083799c842945f40d47253da936788b9111ee64723491e10dffcd33a1965da53] <==
	E1217 08:16:11.910551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 08:16:11.910895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 08:16:11.910911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 08:16:11.911133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 08:16:11.911151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 08:16:11.911237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 08:16:11.911245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 08:16:11.911327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 08:16:11.911337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 08:16:11.911440       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 08:16:12.769459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 08:16:12.835082       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 08:16:12.873181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 08:16:12.889747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 08:16:12.980100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 08:16:13.043755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 08:16:13.058099       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 08:16:13.080027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 08:16:13.080709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 08:16:13.114669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1217 08:16:13.186786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 08:16:13.222800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 08:16:13.237305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 08:16:13.266249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1217 08:16:15.797356       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 08:19:16 addons-102582 kubelet[1505]: I1217 08:19:16.243065    1505 scope.go:117] "RemoveContainer" containerID="9ce8036b7bf9c77b688bba4bfe79bb0a4042901aeb85b8267f45421ed66529b1"
	Dec 17 08:19:16 addons-102582 kubelet[1505]: I1217 08:19:16.805253    1505 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ca3cc75-0e61-46df-a30e-15c3e48f03a5" path="/var/lib/kubelet/pods/0ca3cc75-0e61-46df-a30e-15c3e48f03a5/volumes"
	Dec 17 08:19:16 addons-102582 kubelet[1505]: I1217 08:19:16.805739    1505 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13f8e1a4-8dd0-4b67-b723-e087102dad22" path="/var/lib/kubelet/pods/13f8e1a4-8dd0-4b67-b723-e087102dad22/volumes"
	Dec 17 08:19:16 addons-102582 kubelet[1505]: I1217 08:19:16.806501    1505 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b66231e-f684-4b05-a09d-9e69a5da45fd" path="/var/lib/kubelet/pods/1b66231e-f684-4b05-a09d-9e69a5da45fd/volumes"
	Dec 17 08:19:25 addons-102582 kubelet[1505]: E1217 08:19:25.139528    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765959565139119690  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 08:19:25 addons-102582 kubelet[1505]: E1217 08:19:25.139572    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765959565139119690  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 08:19:35 addons-102582 kubelet[1505]: E1217 08:19:35.142330    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765959575141955497  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 08:19:35 addons-102582 kubelet[1505]: E1217 08:19:35.142398    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765959575141955497  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 08:19:45 addons-102582 kubelet[1505]: E1217 08:19:45.145244    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765959585144928181  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 08:19:45 addons-102582 kubelet[1505]: E1217 08:19:45.145286    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765959585144928181  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 08:19:55 addons-102582 kubelet[1505]: E1217 08:19:55.148467    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765959595148062808  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 08:19:55 addons-102582 kubelet[1505]: E1217 08:19:55.148511    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765959595148062808  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 08:20:05 addons-102582 kubelet[1505]: E1217 08:20:05.151804    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765959605151204572  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 08:20:05 addons-102582 kubelet[1505]: E1217 08:20:05.152271    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765959605151204572  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 08:20:15 addons-102582 kubelet[1505]: E1217 08:20:15.155232    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765959615154786808  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 08:20:15 addons-102582 kubelet[1505]: E1217 08:20:15.155258    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765959615154786808  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 08:20:25 addons-102582 kubelet[1505]: E1217 08:20:25.159089    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765959625158559564  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 08:20:25 addons-102582 kubelet[1505]: E1217 08:20:25.159117    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765959625158559564  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 08:20:26 addons-102582 kubelet[1505]: I1217 08:20:26.803833    1505 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-2ncbq" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 08:20:35 addons-102582 kubelet[1505]: E1217 08:20:35.162590    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765959635162174605  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 08:20:35 addons-102582 kubelet[1505]: E1217 08:20:35.162691    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765959635162174605  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 08:20:35 addons-102582 kubelet[1505]: I1217 08:20:35.799051    1505 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 08:20:45 addons-102582 kubelet[1505]: E1217 08:20:45.165962    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765959645165517402  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 08:20:45 addons-102582 kubelet[1505]: E1217 08:20:45.166012    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765959645165517402  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 08:20:48 addons-102582 kubelet[1505]: I1217 08:20:48.953182    1505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9n6f\" (UniqueName: \"kubernetes.io/projected/c4a7a4eb-bf3f-422b-ae1a-5bff4499cc31-kube-api-access-t9n6f\") pod \"hello-world-app-5d498dc89-kn6rf\" (UID: \"c4a7a4eb-bf3f-422b-ae1a-5bff4499cc31\") " pod="default/hello-world-app-5d498dc89-kn6rf"
	
	
	==> storage-provisioner [d4e649dca23d592d2d8593c371a115469cf04e57d7f3966505a0128d1f86d111] <==
	W1217 08:20:26.019983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:20:28.024052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:20:28.032031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:20:30.034767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:20:30.039830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:20:32.043210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:20:32.051055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:20:34.055575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:20:34.060416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:20:36.064476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:20:36.073272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:20:38.075990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:20:38.081224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:20:40.086573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:20:40.094042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:20:42.097060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:20:42.101555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:20:44.106175       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:20:44.113081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:20:46.116365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:20:46.122023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:20:48.126020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:20:48.130772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:20:50.135374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:20:50.141842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
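The tail of the storage-provisioner block above is a wall of "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings, apparently from a periodic get/update of a legacy Endpoints object (likely its leader-election lock); it is noisy but unrelated to the ingress failure. For reference, a minimal client-go sketch of the API the warning points to, listing the EndpointSlices behind a Service, is shown below. This is illustrative only, not minikube or provisioner code; the kubeconfig path, namespace and service name are assumptions.

package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	discoveryv1 "k8s.io/api/discovery/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Lists the discovery.k8s.io/v1 EndpointSlices that back a Service, i.e. the
// API the deprecation warning recommends instead of core/v1 Endpoints.
func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config") // assumed path; uses the current context
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// EndpointSlices created for a Service carry the kubernetes.io/service-name label.
	sel := discoveryv1.LabelServiceName + "=kubernetes" // assumed example service in "default"
	slices, err := cs.DiscoveryV1().EndpointSlices("default").List(context.Background(),
		metav1.ListOptions{LabelSelector: sel})
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range slices.Items {
		for _, ep := range s.Endpoints {
			fmt.Println(s.Name, ep.Addresses)
		}
	}
}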
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-102582 -n addons-102582
helpers_test.go:270: (dbg) Run:  kubectl --context addons-102582 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-kn6rf ingress-nginx-admission-create-9kfwn ingress-nginx-admission-patch-5l9hl
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-102582 describe pod hello-world-app-5d498dc89-kn6rf ingress-nginx-admission-create-9kfwn ingress-nginx-admission-patch-5l9hl
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-102582 describe pod hello-world-app-5d498dc89-kn6rf ingress-nginx-admission-create-9kfwn ingress-nginx-admission-patch-5l9hl: exit status 1 (71.023564ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-kn6rf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-102582/192.168.39.110
	Start Time:       Wed, 17 Dec 2025 08:20:48 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t9n6f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-t9n6f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-kn6rf to addons-102582
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-9kfwn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-5l9hl" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-102582 describe pod hello-world-app-5d498dc89-kn6rf ingress-nginx-admission-create-9kfwn ingress-nginx-admission-patch-5l9hl: exit status 1
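The post-mortem pod list above comes from the field-selector query status.phase!=Running across all namespaces (helpers_test.go:270), which is why the freshly scheduled, still-ContainerCreating hello-world-app pod shows up alongside the two already-deleted admission jobs. Below is a minimal client-go sketch of the same query for running it outside the harness; it is illustrative only, the kubeconfig path is an assumption, and it uses the current kubeconfig context rather than passing --context addons-102582 explicitly.

package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Prints every pod whose phase is not Running, mirroring
// `kubectl get po -A --field-selector=status.phase!=Running`.
func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config") // assumed path
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}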
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-102582 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-102582 addons disable ingress-dns --alsologtostderr -v=1: (1.384986923s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-102582 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-102582 addons disable ingress --alsologtostderr -v=1: (7.718491483s)
--- FAIL: TestAddons/parallel/Ingress (156.44s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (302.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-122342 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-122342 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-122342 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-122342 --alsologtostderr -v=1] stderr:
I1217 08:26:39.491961  903852 out.go:360] Setting OutFile to fd 1 ...
I1217 08:26:39.492081  903852 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:26:39.492092  903852 out.go:374] Setting ErrFile to fd 2...
I1217 08:26:39.492097  903852 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:26:39.492365  903852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
I1217 08:26:39.492633  903852 mustload.go:66] Loading cluster: functional-122342
I1217 08:26:39.492969  903852 config.go:182] Loaded profile config "functional-122342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 08:26:39.494916  903852 host.go:66] Checking if "functional-122342" exists ...
I1217 08:26:39.495163  903852 api_server.go:166] Checking apiserver status ...
I1217 08:26:39.495210  903852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1217 08:26:39.497636  903852 main.go:143] libmachine: domain functional-122342 has defined MAC address 52:54:00:ba:6d:2c in network mk-functional-122342
I1217 08:26:39.498135  903852 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:6d:2c", ip: ""} in network mk-functional-122342: {Iface:virbr1 ExpiryTime:2025-12-17 09:23:27 +0000 UTC Type:0 Mac:52:54:00:ba:6d:2c Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:functional-122342 Clientid:01:52:54:00:ba:6d:2c}
I1217 08:26:39.498177  903852 main.go:143] libmachine: domain functional-122342 has defined IP address 192.168.39.97 and MAC address 52:54:00:ba:6d:2c in network mk-functional-122342
I1217 08:26:39.498357  903852 sshutil.go:56] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/functional-122342/id_rsa Username:docker}
I1217 08:26:39.597552  903852 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5892/cgroup
W1217 08:26:39.608753  903852 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5892/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1217 08:26:39.608843  903852 ssh_runner.go:195] Run: ls
I1217 08:26:39.613885  903852 api_server.go:253] Checking apiserver healthz at https://192.168.39.97:8441/healthz ...
I1217 08:26:39.620093  903852 api_server.go:279] https://192.168.39.97:8441/healthz returned 200:
ok
W1217 08:26:39.620138  903852 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1217 08:26:39.620294  903852 config.go:182] Loaded profile config "functional-122342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 08:26:39.620311  903852 addons.go:70] Setting dashboard=true in profile "functional-122342"
I1217 08:26:39.620320  903852 addons.go:239] Setting addon dashboard=true in "functional-122342"
I1217 08:26:39.620359  903852 host.go:66] Checking if "functional-122342" exists ...
I1217 08:26:39.623713  903852 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1217 08:26:39.624965  903852 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1217 08:26:39.626075  903852 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1217 08:26:39.626097  903852 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1217 08:26:39.628596  903852 main.go:143] libmachine: domain functional-122342 has defined MAC address 52:54:00:ba:6d:2c in network mk-functional-122342
I1217 08:26:39.629015  903852 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:6d:2c", ip: ""} in network mk-functional-122342: {Iface:virbr1 ExpiryTime:2025-12-17 09:23:27 +0000 UTC Type:0 Mac:52:54:00:ba:6d:2c Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:functional-122342 Clientid:01:52:54:00:ba:6d:2c}
I1217 08:26:39.629040  903852 main.go:143] libmachine: domain functional-122342 has defined IP address 192.168.39.97 and MAC address 52:54:00:ba:6d:2c in network mk-functional-122342
I1217 08:26:39.629175  903852 sshutil.go:56] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/functional-122342/id_rsa Username:docker}
I1217 08:26:39.727767  903852 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1217 08:26:39.727802  903852 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1217 08:26:39.749347  903852 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1217 08:26:39.749380  903852 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1217 08:26:39.770233  903852 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1217 08:26:39.770261  903852 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1217 08:26:39.793877  903852 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1217 08:26:39.793900  903852 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1217 08:26:39.813863  903852 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1217 08:26:39.813903  903852 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1217 08:26:39.834932  903852 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1217 08:26:39.834966  903852 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1217 08:26:39.855531  903852 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1217 08:26:39.855557  903852 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1217 08:26:39.875481  903852 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1217 08:26:39.875535  903852 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1217 08:26:39.895113  903852 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1217 08:26:39.895147  903852 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1217 08:26:39.915860  903852 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1217 08:26:40.627394  903852 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-122342 addons enable metrics-server

                                                
                                                
I1217 08:26:40.628597  903852 addons.go:202] Writing out "functional-122342" config to set dashboard=true...
W1217 08:26:40.628846  903852 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1217 08:26:40.629466  903852 kapi.go:59] client config for functional-122342: &rest.Config{Host:"https://192.168.39.97:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt", KeyFile:"/home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.key", CAFile:"/home/jenkins/minikube-integration/22182-893359/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2818a40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1217 08:26:40.629960  903852 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1217 08:26:40.629976  903852 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1217 08:26:40.629980  903852 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1217 08:26:40.629984  903852 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1217 08:26:40.629988  903852 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1217 08:26:40.638975  903852 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  9af51bd8-3ee4-4d2c-a4d3-0bbf9992ba5f 876 0 2025-12-17 08:26:40 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-17 08:26:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.98.16.212,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.98.16.212],IPFamilies:[IPv4],AllocateLoadBalancerN
odePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1217 08:26:40.639131  903852 out.go:285] * Launching proxy ...
* Launching proxy ...
I1217 08:26:40.639191  903852 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-122342 proxy --port 36195]
I1217 08:26:40.639588  903852 dashboard.go:159] Waiting for kubectl to output host:port ...
I1217 08:26:40.681565  903852 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1217 08:26:40.681630  903852 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1217 08:26:40.692885  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[81ba2246-d053-4301-8532-d1e2aa6b4a1f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:26:40 GMT]] Body:0xc00158ed00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208c80 TLS:<nil>}
I1217 08:26:40.692989  903852 retry.go:31] will retry after 129.488µs: Temporary Error: unexpected response code: 503
I1217 08:26:40.696469  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1eae3e59-21a3-4288-9ca8-ad2105ba1ac8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:26:40 GMT]] Body:0xc00171ae40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000045900 TLS:<nil>}
I1217 08:26:40.696558  903852 retry.go:31] will retry after 147.11µs: Temporary Error: unexpected response code: 503
I1217 08:26:40.700063  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dfea39af-2923-4947-a68d-91b1a60fae0f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:26:40 GMT]] Body:0xc00158ee00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c0b40 TLS:<nil>}
I1217 08:26:40.700101  903852 retry.go:31] will retry after 330.59µs: Temporary Error: unexpected response code: 503
I1217 08:26:40.703658  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[26180ec4-0b41-44cd-a3c0-58cc81ac33e1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:26:40 GMT]] Body:0xc000858b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000045a40 TLS:<nil>}
I1217 08:26:40.703702  903852 retry.go:31] will retry after 392.163µs: Temporary Error: unexpected response code: 503
I1217 08:26:40.706980  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2677726d-ec44-4b8e-9fb0-0fc3b5447a60] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:26:40 GMT]] Body:0xc00158ef00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208f00 TLS:<nil>}
I1217 08:26:40.707019  903852 retry.go:31] will retry after 393.439µs: Temporary Error: unexpected response code: 503
I1217 08:26:40.710270  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c8d75865-401c-40f8-bb7f-436377e73e9f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:26:40 GMT]] Body:0xc00171af00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000045b80 TLS:<nil>}
I1217 08:26:40.710325  903852 retry.go:31] will retry after 454.676µs: Temporary Error: unexpected response code: 503
I1217 08:26:40.713686  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e4b34512-ea15-413e-8a3f-f66ac21bdad6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:26:40 GMT]] Body:0xc00158f000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c0c80 TLS:<nil>}
I1217 08:26:40.713738  903852 retry.go:31] will retry after 656.12µs: Temporary Error: unexpected response code: 503
I1217 08:26:40.716755  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7c48dc8e-46ae-47de-9dfa-f87266a636c3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:26:40 GMT]] Body:0xc000858c80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000045cc0 TLS:<nil>}
I1217 08:26:40.716800  903852 retry.go:31] will retry after 2.200951ms: Temporary Error: unexpected response code: 503
I1217 08:26:40.722425  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a9e3b9e0-5a9c-40d0-aa62-0c0e09575a6e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:26:40 GMT]] Body:0xc000858d40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209180 TLS:<nil>}
I1217 08:26:40.722491  903852 retry.go:31] will retry after 2.078939ms: Temporary Error: unexpected response code: 503
I1217 08:26:40.727001  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d50ec502-6c7d-4e22-8cfd-6afb83cc53a3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:26:40 GMT]] Body:0xc00171b000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002092c0 TLS:<nil>}
I1217 08:26:40.727050  903852 retry.go:31] will retry after 3.183717ms: Temporary Error: unexpected response code: 503
I1217 08:26:40.733440  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[238104de-1150-4683-b67c-fb15d704bead] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:26:40 GMT]] Body:0xc000858e40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c0dc0 TLS:<nil>}
I1217 08:26:40.733479  903852 retry.go:31] will retry after 7.306952ms: Temporary Error: unexpected response code: 503
I1217 08:26:40.743725  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2a1ed274-2190-4576-821a-1c01b503931c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:26:40 GMT]] Body:0xc00171b100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209400 TLS:<nil>}
I1217 08:26:40.743759  903852 retry.go:31] will retry after 4.402713ms: Temporary Error: unexpected response code: 503
I1217 08:26:40.751643  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[84f2affe-8615-472c-af9a-1750845d05c1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:26:40 GMT]] Body:0xc00171b1c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c1040 TLS:<nil>}
I1217 08:26:40.751701  903852 retry.go:31] will retry after 6.891988ms: Temporary Error: unexpected response code: 503
I1217 08:26:40.761390  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[15ab5a06-8209-419d-8ecd-448ec83e89c9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:26:40 GMT]] Body:0xc00158f140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c1180 TLS:<nil>}
I1217 08:26:40.761446  903852 retry.go:31] will retry after 13.287265ms: Temporary Error: unexpected response code: 503
I1217 08:26:40.777765  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1e65eb28-fdfc-44d4-8e91-7af6e234163f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:26:40 GMT]] Body:0xc000858f80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000045e00 TLS:<nil>}
I1217 08:26:40.777805  903852 retry.go:31] will retry after 40.94367ms: Temporary Error: unexpected response code: 503
I1217 08:26:40.822915  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6622e633-674c-48af-8b25-cef5a1d3ec23] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:26:40 GMT]] Body:0xc00171b2c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209540 TLS:<nil>}
I1217 08:26:40.822978  903852 retry.go:31] will retry after 26.124184ms: Temporary Error: unexpected response code: 503
I1217 08:26:40.852993  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ad0ea775-6006-4289-8173-84952a10ebd0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:26:40 GMT]] Body:0xc00171b380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c12c0 TLS:<nil>}
I1217 08:26:40.853082  903852 retry.go:31] will retry after 71.409015ms: Temporary Error: unexpected response code: 503
I1217 08:26:40.931048  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[85a9ec63-c338-49d5-afc4-69d6a82520f8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:26:40 GMT]] Body:0xc00158f240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c1400 TLS:<nil>}
I1217 08:26:40.931148  903852 retry.go:31] will retry after 68.750312ms: Temporary Error: unexpected response code: 503
I1217 08:26:41.004915  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[696f83e4-6986-4a65-a658-bf5d8c8a1acc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:26:40 GMT]] Body:0xc00171b440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000136000 TLS:<nil>}
I1217 08:26:41.005000  903852 retry.go:31] will retry after 115.742402ms: Temporary Error: unexpected response code: 503
I1217 08:26:41.127740  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[536728fe-0089-41c3-9578-f4adcb509329] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:26:41 GMT]] Body:0xc00158f340 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c1540 TLS:<nil>}
I1217 08:26:41.127823  903852 retry.go:31] will retry after 117.77665ms: Temporary Error: unexpected response code: 503
I1217 08:26:41.249184  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8c152b55-6f91-4ba4-9e50-b3b71f4cf38d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:26:41 GMT]] Body:0xc00171b540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000136140 TLS:<nil>}
I1217 08:26:41.249268  903852 retry.go:31] will retry after 329.552522ms: Temporary Error: unexpected response code: 503
I1217 08:26:41.583069  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c70f4058-cbd2-4fa9-9ac7-680b21453d40] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:26:41 GMT]] Body:0xc00158f440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c1680 TLS:<nil>}
I1217 08:26:41.583130  903852 retry.go:31] will retry after 306.304488ms: Temporary Error: unexpected response code: 503
I1217 08:26:41.894156  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ad2f2d1a-6a4c-49c3-8741-3cdd7b1d3326] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:26:41 GMT]] Body:0xc000859140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000136280 TLS:<nil>}
I1217 08:26:41.894227  903852 retry.go:31] will retry after 821.096925ms: Temporary Error: unexpected response code: 503
I1217 08:26:42.719266  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1e21aefb-2324-461d-8803-4518c6cce7ea] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:26:42 GMT]] Body:0xc00158f4c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209680 TLS:<nil>}
I1217 08:26:42.719329  903852 retry.go:31] will retry after 1.145166527s: Temporary Error: unexpected response code: 503
I1217 08:26:43.869071  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[aa8b96fd-9bb4-4179-9a0a-41fb8a336787] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:26:43 GMT]] Body:0xc00171b680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001363c0 TLS:<nil>}
I1217 08:26:43.869166  903852 retry.go:31] will retry after 1.146086038s: Temporary Error: unexpected response code: 503
I1217 08:26:45.018570  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[62dde51b-565e-4497-bdca-64114ab1f903] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:26:45 GMT]] Body:0xc000859280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000136500 TLS:<nil>}
I1217 08:26:45.018672  903852 retry.go:31] will retry after 3.132477798s: Temporary Error: unexpected response code: 503
I1217 08:26:48.155583  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7c57567a-9008-40d6-aef2-1a844e6d4a20] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:26:48 GMT]] Body:0xc00158f600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209900 TLS:<nil>}
I1217 08:26:48.155682  903852 retry.go:31] will retry after 4.898875011s: Temporary Error: unexpected response code: 503
I1217 08:26:53.061629  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[da3a280e-76be-480f-836b-79a82ff4440f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:26:53 GMT]] Body:0xc00171b700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209a40 TLS:<nil>}
I1217 08:26:53.061727  903852 retry.go:31] will retry after 3.994789982s: Temporary Error: unexpected response code: 503
I1217 08:26:57.062082  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b9f71c66-2d1d-4858-a806-28a321d8e42a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:26:57 GMT]] Body:0xc00158f680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c17c0 TLS:<nil>}
I1217 08:26:57.062148  903852 retry.go:31] will retry after 7.670691212s: Temporary Error: unexpected response code: 503
I1217 08:27:04.736377  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f613b02b-35b9-40a1-8441-ea248146562f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:27:04 GMT]] Body:0xc00158f700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209b80 TLS:<nil>}
I1217 08:27:04.736463  903852 retry.go:31] will retry after 17.438168116s: Temporary Error: unexpected response code: 503
I1217 08:27:22.178754  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3ff0033a-cb73-46ff-af46-95a01ebac0c1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:27:22 GMT]] Body:0xc00171b800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209cc0 TLS:<nil>}
I1217 08:27:22.178822  903852 retry.go:31] will retry after 27.823743111s: Temporary Error: unexpected response code: 503
I1217 08:27:50.006497  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5f06c681-402d-43d7-a1eb-412ada4973ae] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:27:49 GMT]] Body:0xc00158f780 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c1900 TLS:<nil>}
I1217 08:27:50.006580  903852 retry.go:31] will retry after 29.062693846s: Temporary Error: unexpected response code: 503
I1217 08:28:19.077287  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f10b14e4-3bc6-4866-9a9a-15e119e4df39] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:28:19 GMT]] Body:0xc0014907c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209e00 TLS:<nil>}
I1217 08:28:19.077369  903852 retry.go:31] will retry after 28.543090192s: Temporary Error: unexpected response code: 503
I1217 08:28:47.627243  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e1de57c3-da8d-4567-a9b2-765080f2bba5] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:28:47 GMT]] Body:0xc00158e040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208640 TLS:<nil>}
I1217 08:28:47.627324  903852 retry.go:31] will retry after 59.307263399s: Temporary Error: unexpected response code: 503
I1217 08:29:46.940201  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fed4ef8c-282f-46d3-9b34-443af93e80ca] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:29:46 GMT]] Body:0xc00158e100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c0140 TLS:<nil>}
I1217 08:29:46.940280  903852 retry.go:31] will retry after 56.138357952s: Temporary Error: unexpected response code: 503
I1217 08:30:43.082148  903852 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0844486e-f3ee-42af-920a-eac600172157] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 08:30:43 GMT]] Body:0xc000858080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c0280 TLS:<nil>}
I1217 08:30:43.082256  903852 retry.go:31] will retry after 1m25.437346301s: Temporary Error: unexpected response code: 503
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-122342 -n functional-122342
helpers_test.go:253: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-122342 logs -n 25: (1.274571418s)
helpers_test.go:261: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-122342 image load --daemon kicbase/echo-server:functional-122342 --alsologtostderr                                                                │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:26 UTC │ 17 Dec 25 08:26 UTC │
	│ image          │ functional-122342 image ls                                                                                                                                   │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:26 UTC │ 17 Dec 25 08:26 UTC │
	│ image          │ functional-122342 image save kicbase/echo-server:functional-122342 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:26 UTC │ 17 Dec 25 08:26 UTC │
	│ image          │ functional-122342 image rm kicbase/echo-server:functional-122342 --alsologtostderr                                                                           │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:26 UTC │ 17 Dec 25 08:26 UTC │
	│ image          │ functional-122342 image ls                                                                                                                                   │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:26 UTC │ 17 Dec 25 08:26 UTC │
	│ image          │ functional-122342 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:26 UTC │ 17 Dec 25 08:26 UTC │
	│ image          │ functional-122342 image ls                                                                                                                                   │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:26 UTC │ 17 Dec 25 08:26 UTC │
	│ image          │ functional-122342 image save --daemon kicbase/echo-server:functional-122342 --alsologtostderr                                                                │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:26 UTC │ 17 Dec 25 08:26 UTC │
	│ ssh            │ functional-122342 ssh sudo cat /etc/ssl/certs/897277.pem                                                                                                     │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:26 UTC │ 17 Dec 25 08:26 UTC │
	│ ssh            │ functional-122342 ssh sudo cat /usr/share/ca-certificates/897277.pem                                                                                         │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:26 UTC │ 17 Dec 25 08:26 UTC │
	│ ssh            │ functional-122342 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                     │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:26 UTC │ 17 Dec 25 08:26 UTC │
	│ ssh            │ functional-122342 ssh sudo cat /etc/ssl/certs/8972772.pem                                                                                                    │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:26 UTC │ 17 Dec 25 08:26 UTC │
	│ ssh            │ functional-122342 ssh sudo cat /usr/share/ca-certificates/8972772.pem                                                                                        │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:26 UTC │ 17 Dec 25 08:26 UTC │
	│ ssh            │ functional-122342 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                     │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:26 UTC │ 17 Dec 25 08:26 UTC │
	│ ssh            │ functional-122342 ssh sudo cat /etc/test/nested/copy/897277/hosts                                                                                            │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:26 UTC │ 17 Dec 25 08:26 UTC │
	│ image          │ functional-122342 image ls --format short --alsologtostderr                                                                                                  │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:29 UTC │ 17 Dec 25 08:29 UTC │
	│ image          │ functional-122342 image ls --format yaml --alsologtostderr                                                                                                   │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:29 UTC │ 17 Dec 25 08:29 UTC │
	│ ssh            │ functional-122342 ssh pgrep buildkitd                                                                                                                        │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:29 UTC │                     │
	│ image          │ functional-122342 image build -t localhost/my-image:functional-122342 testdata/build --alsologtostderr                                                       │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:29 UTC │ 17 Dec 25 08:29 UTC │
	│ image          │ functional-122342 image ls                                                                                                                                   │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:29 UTC │ 17 Dec 25 08:29 UTC │
	│ image          │ functional-122342 image ls --format table --alsologtostderr                                                                                                  │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:29 UTC │ 17 Dec 25 08:29 UTC │
	│ image          │ functional-122342 image ls --format json --alsologtostderr                                                                                                   │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:29 UTC │ 17 Dec 25 08:29 UTC │
	│ update-context │ functional-122342 update-context --alsologtostderr -v=2                                                                                                      │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:29 UTC │ 17 Dec 25 08:29 UTC │
	│ update-context │ functional-122342 update-context --alsologtostderr -v=2                                                                                                      │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:29 UTC │ 17 Dec 25 08:29 UTC │
	│ update-context │ functional-122342 update-context --alsologtostderr -v=2                                                                                                      │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:29 UTC │ 17 Dec 25 08:29 UTC │
	└────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 08:26:39
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 08:26:39.388994  903836 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:26:39.389100  903836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:26:39.389109  903836 out.go:374] Setting ErrFile to fd 2...
	I1217 08:26:39.389113  903836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:26:39.389282  903836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
	I1217 08:26:39.389700  903836 out.go:368] Setting JSON to false
	I1217 08:26:39.390571  903836 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11345,"bootTime":1765948654,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:26:39.390633  903836 start.go:143] virtualization: kvm guest
	I1217 08:26:39.392213  903836 out.go:179] * [functional-122342] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 08:26:39.393535  903836 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 08:26:39.393544  903836 notify.go:221] Checking for updates...
	I1217 08:26:39.395416  903836 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:26:39.396707  903836 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-893359/kubeconfig
	I1217 08:26:39.397697  903836 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-893359/.minikube
	I1217 08:26:39.398675  903836 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 08:26:39.399610  903836 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 08:26:39.401104  903836 config.go:182] Loaded profile config "functional-122342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:26:39.401836  903836 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:26:39.431419  903836 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 08:26:39.432431  903836 start.go:309] selected driver: kvm2
	I1217 08:26:39.432446  903836 start.go:927] validating driver "kvm2" against &{Name:functional-122342 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.3 ClusterName:functional-122342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:26:39.432577  903836 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 08:26:39.433465  903836 cni.go:84] Creating CNI manager for ""
	I1217 08:26:39.433569  903836 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 08:26:39.433640  903836 start.go:353] cluster config:
	{Name:functional-122342 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-122342 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:26:39.434839  903836 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 17 08:31:40 functional-122342 crio[5246]: time="2025-12-17 08:31:40.174156017Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765960300174133467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:243805,},InodesUsed:&UInt64Value{Value:113,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5e01d95-aa50-4604-a30a-977cca3cc230 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:31:40 functional-122342 crio[5246]: time="2025-12-17 08:31:40.176270192Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b754d14-a50b-4b4c-a524-279696dbaec4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:31:40 functional-122342 crio[5246]: time="2025-12-17 08:31:40.176839004Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b754d14-a50b-4b4c-a524-279696dbaec4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:31:40 functional-122342 crio[5246]: time="2025-12-17 08:31:40.177360418Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b60691b33127eb1d69e1d578a2df5194074c8afb6040d0e859e225b2a7faf00b,PodSandboxId:68d561cd832ffedc54088464d71a19192613de8752150781690c77e00d0d3ac3,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765960134445008730,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-g9l2q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2acc9aa8-e16c-4de2-8104-2731803a9cc0,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deb72c7423d781bbd6336ed4a40839f09a28f7732d98aa1988a91d149b13d536,PodSandboxId:8162d268f90553392b21f8e4efa0af39292098ae35c0c24e47eae372f6246e9b,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765960002487351390,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db4392ab-d4a8-4176-a0d5-5e79bc4d77ed,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78c0b6faa76255f96dd62af5fd66262225c7dd99481de1050acd67d212d3a5cb,PodSandboxId:132c90e54869af00ad8a1cc4d25020ae19d0c9956e1c1d259f3759dac16cf034,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765959992443509180,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ab11f47f-3405-419b-90f6-1e11c8e5cd9e,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3424101cf26fda518503e05bf7b2d5e7236ea52cdc904f8a164497e3ec7063,PodSandboxId:71e2c74872b5a33003691fff2ed5442545cca707510771d0e0b74eb1516adf7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765959908735934561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zpmv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08fa188e-90ff-4dfe-86df-b40eef36765d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.po
rts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d925826c8f6f8fbf3c96ac2c39bef90c2c64073d2bb8523c2026f14a05cad7e,PodSandboxId:8a40b453568e36ababc4eeadcb175c32b9c97488d4dde5746b6559f9bb078156,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_R
UNNING,CreatedAt:1765959908522452636,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-954rb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daf12b32-6915-43d9-b1b0-c897d53bca11,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dbb693f0f59248e6ead291b0e92aba3429534efec95a15bc4cda79405de7355,PodSandboxId:3a827042a4940a14c587f4cec329b8ba40f0107254e0b0ed9748d6b967c4d911,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765959
908453690378,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2d63c8-592a-4d1c-a3e6-cfcc8c3d6ee9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a00bd056609476d6ee4f2b20d1d2344e721e30762f27a0687744b8b3486f3149,PodSandboxId:3e1ed5f767039e00e4df38de35f9176c60ec1c8af91b99d507c5d302d1931135,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765959903804915433,Labels:map[str
ing]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba342c5421c70f9a936a60f9dc9b0678,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:238d16a286b7c1e3f843ca9f49efa5e9c395f1d4f26eb3f2285acd0dbeacc6d0,PodSandboxId:82fb78bdc40026c80bf5943e939619fabe51c64ec5f5cacb57dc8ccda1c6bd15,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736
d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765959903702953944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300532227cce2617d7152b0b0ce7d38d,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8690c53de65dfa345567ecd5227894357e1b82b998968d2fd4d632439c1e17e3,PodSandboxId:e38614e0cd7b821d98279f976cd02f486d895c06fb834319ae2378e85efee607,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotat
ions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765959903731882170,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2d39ca3e768afa6c2876cc33ec430d,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ec936640a50471bba654dfa6356efdd70d3a5729b21c3a8f12de69136db7cf,PodSandboxId:ef656908d2b6599d327cb98383f24340bb090f05d05c11ef96e5e7634482c200,Metadata:&ContainerMetadata
{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765959903722314004,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ce8ca7de0471d348f97a4fbf14f4cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:733f311df4b6f138b4447648303dc3ccf991f4465c1a805ce4639457f18e
318c,PodSandboxId:546f8fb776a5c3846f1370782db9e94fd5fcd478121542ff2d33b26798b43225,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765959864347695593,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zpmv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08fa188e-90ff-4dfe-86df-b40eef36765d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness
-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26dd379d79c851d78e49afda695d69aa1a1496132def151ccc64aca74a55ba04,PodSandboxId:d57f6fa06bf8adabfb271001e8803287eb8ffadb89098032e8d2a68c25ee6f1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1765959863911689903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-954rb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daf12b32-6915-43d9-b1b0-c897d53bca11,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dba1678e8c75f270f557406de1ea4d54c058405d51078888b5a24b251f0680b2,PodSandboxId:37756fdf1a3717047c0fd184ce9a77415ea73d8ba538e154786cde5d6f4806c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765959863932722534,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2d63c8-592a-4d1c-a3e6-cfcc8c3d6ee9,},Annotations:map[string]string{io.kubernetes.container.hash: 6
c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5b186eccdc4e422c6f76bbba5bc0ef1eb32b926601973fa064f3bcadfc7b7c,PodSandboxId:34d0183982ab8397230f2901ed1e419778a678d9a64e966f6f60c06d4840cb51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1765959860173761568,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ce8ca7de0471d348f97a4fbf14f4cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kub
ernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91e15e9406ddd182134e86ad3b00b12cb5607dc046505f7a127894a576e9419,PodSandboxId:65d7100394df905a2f9450b95c42db3ef1ee66205b82099018b319f4a048e21d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1765959860175676559,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-122342,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 3e2d39ca3e768afa6c2876cc33ec430d,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77651de2ba10aee37eab124a6b2a972669382d5aa8f7ca2419c76bc2a15bff59,PodSandboxId:ff0e203b9a657d1d2b8e9034a0d8f8948a408b2d463c68ba29cacc577e6b0184,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765959860074675831,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kub
ernetes.pod.name: etcd-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba342c5421c70f9a936a60f9dc9b0678,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b754d14-a50b-4b4c-a524-279696dbaec4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:31:40 functional-122342 crio[5246]: time="2025-12-17 08:31:40.215145199Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aee944c5-46d9-4036-9e59-ba2dd88f5fce name=/runtime.v1.RuntimeService/Version
	Dec 17 08:31:40 functional-122342 crio[5246]: time="2025-12-17 08:31:40.215377754Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aee944c5-46d9-4036-9e59-ba2dd88f5fce name=/runtime.v1.RuntimeService/Version
	Dec 17 08:31:40 functional-122342 crio[5246]: time="2025-12-17 08:31:40.216508791Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=499ccd5e-59a7-4718-af99-2670213ac5ab name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:31:40 functional-122342 crio[5246]: time="2025-12-17 08:31:40.218111797Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765960300218087415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:243805,},InodesUsed:&UInt64Value{Value:113,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=499ccd5e-59a7-4718-af99-2670213ac5ab name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:31:40 functional-122342 crio[5246]: time="2025-12-17 08:31:40.219044224Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6290e4c9-0892-40c6-841f-029bb59d8cce name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:31:40 functional-122342 crio[5246]: time="2025-12-17 08:31:40.219114557Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6290e4c9-0892-40c6-841f-029bb59d8cce name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:31:40 functional-122342 crio[5246]: time="2025-12-17 08:31:40.219520018Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b60691b33127eb1d69e1d578a2df5194074c8afb6040d0e859e225b2a7faf00b,PodSandboxId:68d561cd832ffedc54088464d71a19192613de8752150781690c77e00d0d3ac3,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765960134445008730,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-g9l2q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2acc9aa8-e16c-4de2-8104-2731803a9cc0,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deb72c7423d781bbd6336ed4a40839f09a28f7732d98aa1988a91d149b13d536,PodSandboxId:8162d268f90553392b21f8e4efa0af39292098ae35c0c24e47eae372f6246e9b,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765960002487351390,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db4392ab-d4a8-4176-a0d5-5e79bc4d77ed,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78c0b6faa76255f96dd62af5fd66262225c7dd99481de1050acd67d212d3a5cb,PodSandboxId:132c90e54869af00ad8a1cc4d25020ae19d0c9956e1c1d259f3759dac16cf034,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765959992443509180,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ab11f47f-3405-419b-90f6-1e11c8e5cd9e,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3424101cf26fda518503e05bf7b2d5e7236ea52cdc904f8a164497e3ec7063,PodSandboxId:71e2c74872b5a33003691fff2ed5442545cca707510771d0e0b74eb1516adf7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765959908735934561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zpmv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08fa188e-90ff-4dfe-86df-b40eef36765d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.po
rts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d925826c8f6f8fbf3c96ac2c39bef90c2c64073d2bb8523c2026f14a05cad7e,PodSandboxId:8a40b453568e36ababc4eeadcb175c32b9c97488d4dde5746b6559f9bb078156,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_R
UNNING,CreatedAt:1765959908522452636,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-954rb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daf12b32-6915-43d9-b1b0-c897d53bca11,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dbb693f0f59248e6ead291b0e92aba3429534efec95a15bc4cda79405de7355,PodSandboxId:3a827042a4940a14c587f4cec329b8ba40f0107254e0b0ed9748d6b967c4d911,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765959
908453690378,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2d63c8-592a-4d1c-a3e6-cfcc8c3d6ee9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a00bd056609476d6ee4f2b20d1d2344e721e30762f27a0687744b8b3486f3149,PodSandboxId:3e1ed5f767039e00e4df38de35f9176c60ec1c8af91b99d507c5d302d1931135,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765959903804915433,Labels:map[str
ing]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba342c5421c70f9a936a60f9dc9b0678,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:238d16a286b7c1e3f843ca9f49efa5e9c395f1d4f26eb3f2285acd0dbeacc6d0,PodSandboxId:82fb78bdc40026c80bf5943e939619fabe51c64ec5f5cacb57dc8ccda1c6bd15,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736
d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765959903702953944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300532227cce2617d7152b0b0ce7d38d,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8690c53de65dfa345567ecd5227894357e1b82b998968d2fd4d632439c1e17e3,PodSandboxId:e38614e0cd7b821d98279f976cd02f486d895c06fb834319ae2378e85efee607,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotat
ions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765959903731882170,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2d39ca3e768afa6c2876cc33ec430d,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ec936640a50471bba654dfa6356efdd70d3a5729b21c3a8f12de69136db7cf,PodSandboxId:ef656908d2b6599d327cb98383f24340bb090f05d05c11ef96e5e7634482c200,Metadata:&ContainerMetadata
{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765959903722314004,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ce8ca7de0471d348f97a4fbf14f4cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:733f311df4b6f138b4447648303dc3ccf991f4465c1a805ce4639457f18e
318c,PodSandboxId:546f8fb776a5c3846f1370782db9e94fd5fcd478121542ff2d33b26798b43225,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765959864347695593,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zpmv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08fa188e-90ff-4dfe-86df-b40eef36765d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness
-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26dd379d79c851d78e49afda695d69aa1a1496132def151ccc64aca74a55ba04,PodSandboxId:d57f6fa06bf8adabfb271001e8803287eb8ffadb89098032e8d2a68c25ee6f1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1765959863911689903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-954rb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daf12b32-6915-43d9-b1b0-c897d53bca11,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dba1678e8c75f270f557406de1ea4d54c058405d51078888b5a24b251f0680b2,PodSandboxId:37756fdf1a3717047c0fd184ce9a77415ea73d8ba538e154786cde5d6f4806c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765959863932722534,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2d63c8-592a-4d1c-a3e6-cfcc8c3d6ee9,},Annotations:map[string]string{io.kubernetes.container.hash: 6
c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5b186eccdc4e422c6f76bbba5bc0ef1eb32b926601973fa064f3bcadfc7b7c,PodSandboxId:34d0183982ab8397230f2901ed1e419778a678d9a64e966f6f60c06d4840cb51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1765959860173761568,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ce8ca7de0471d348f97a4fbf14f4cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kub
ernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91e15e9406ddd182134e86ad3b00b12cb5607dc046505f7a127894a576e9419,PodSandboxId:65d7100394df905a2f9450b95c42db3ef1ee66205b82099018b319f4a048e21d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1765959860175676559,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-122342,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 3e2d39ca3e768afa6c2876cc33ec430d,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77651de2ba10aee37eab124a6b2a972669382d5aa8f7ca2419c76bc2a15bff59,PodSandboxId:ff0e203b9a657d1d2b8e9034a0d8f8948a408b2d463c68ba29cacc577e6b0184,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765959860074675831,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kub
ernetes.pod.name: etcd-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba342c5421c70f9a936a60f9dc9b0678,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6290e4c9-0892-40c6-841f-029bb59d8cce name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:31:40 functional-122342 crio[5246]: time="2025-12-17 08:31:40.248957216Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2cf6fed9-5bff-4e49-9a70-27e38f717e81 name=/runtime.v1.RuntimeService/Version
	Dec 17 08:31:40 functional-122342 crio[5246]: time="2025-12-17 08:31:40.249036571Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2cf6fed9-5bff-4e49-9a70-27e38f717e81 name=/runtime.v1.RuntimeService/Version
	Dec 17 08:31:40 functional-122342 crio[5246]: time="2025-12-17 08:31:40.250598274Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d8e7521-d6db-429a-a7dd-7071e214179e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:31:40 functional-122342 crio[5246]: time="2025-12-17 08:31:40.251349189Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765960300251319850,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:243805,},InodesUsed:&UInt64Value{Value:113,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d8e7521-d6db-429a-a7dd-7071e214179e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:31:40 functional-122342 crio[5246]: time="2025-12-17 08:31:40.252097487Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=924c12f1-fa54-4e66-9bcd-a5c90459d445 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:31:40 functional-122342 crio[5246]: time="2025-12-17 08:31:40.252169671Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=924c12f1-fa54-4e66-9bcd-a5c90459d445 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:31:40 functional-122342 crio[5246]: time="2025-12-17 08:31:40.252551694Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b60691b33127eb1d69e1d578a2df5194074c8afb6040d0e859e225b2a7faf00b,PodSandboxId:68d561cd832ffedc54088464d71a19192613de8752150781690c77e00d0d3ac3,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765960134445008730,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-g9l2q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2acc9aa8-e16c-4de2-8104-2731803a9cc0,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deb72c7423d781bbd6336ed4a40839f09a28f7732d98aa1988a91d149b13d536,PodSandboxId:8162d268f90553392b21f8e4efa0af39292098ae35c0c24e47eae372f6246e9b,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765960002487351390,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db4392ab-d4a8-4176-a0d5-5e79bc4d77ed,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78c0b6faa76255f96dd62af5fd66262225c7dd99481de1050acd67d212d3a5cb,PodSandboxId:132c90e54869af00ad8a1cc4d25020ae19d0c9956e1c1d259f3759dac16cf034,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765959992443509180,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ab11f47f-3405-419b-90f6-1e11c8e5cd9e,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3424101cf26fda518503e05bf7b2d5e7236ea52cdc904f8a164497e3ec7063,PodSandboxId:71e2c74872b5a33003691fff2ed5442545cca707510771d0e0b74eb1516adf7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765959908735934561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zpmv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08fa188e-90ff-4dfe-86df-b40eef36765d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.po
rts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d925826c8f6f8fbf3c96ac2c39bef90c2c64073d2bb8523c2026f14a05cad7e,PodSandboxId:8a40b453568e36ababc4eeadcb175c32b9c97488d4dde5746b6559f9bb078156,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_R
UNNING,CreatedAt:1765959908522452636,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-954rb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daf12b32-6915-43d9-b1b0-c897d53bca11,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dbb693f0f59248e6ead291b0e92aba3429534efec95a15bc4cda79405de7355,PodSandboxId:3a827042a4940a14c587f4cec329b8ba40f0107254e0b0ed9748d6b967c4d911,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765959
908453690378,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2d63c8-592a-4d1c-a3e6-cfcc8c3d6ee9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a00bd056609476d6ee4f2b20d1d2344e721e30762f27a0687744b8b3486f3149,PodSandboxId:3e1ed5f767039e00e4df38de35f9176c60ec1c8af91b99d507c5d302d1931135,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765959903804915433,Labels:map[str
ing]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba342c5421c70f9a936a60f9dc9b0678,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:238d16a286b7c1e3f843ca9f49efa5e9c395f1d4f26eb3f2285acd0dbeacc6d0,PodSandboxId:82fb78bdc40026c80bf5943e939619fabe51c64ec5f5cacb57dc8ccda1c6bd15,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736
d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765959903702953944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300532227cce2617d7152b0b0ce7d38d,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8690c53de65dfa345567ecd5227894357e1b82b998968d2fd4d632439c1e17e3,PodSandboxId:e38614e0cd7b821d98279f976cd02f486d895c06fb834319ae2378e85efee607,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotat
ions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765959903731882170,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2d39ca3e768afa6c2876cc33ec430d,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ec936640a50471bba654dfa6356efdd70d3a5729b21c3a8f12de69136db7cf,PodSandboxId:ef656908d2b6599d327cb98383f24340bb090f05d05c11ef96e5e7634482c200,Metadata:&ContainerMetadata
{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765959903722314004,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ce8ca7de0471d348f97a4fbf14f4cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:733f311df4b6f138b4447648303dc3ccf991f4465c1a805ce4639457f18e
318c,PodSandboxId:546f8fb776a5c3846f1370782db9e94fd5fcd478121542ff2d33b26798b43225,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765959864347695593,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zpmv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08fa188e-90ff-4dfe-86df-b40eef36765d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness
-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26dd379d79c851d78e49afda695d69aa1a1496132def151ccc64aca74a55ba04,PodSandboxId:d57f6fa06bf8adabfb271001e8803287eb8ffadb89098032e8d2a68c25ee6f1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1765959863911689903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-954rb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daf12b32-6915-43d9-b1b0-c897d53bca11,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dba1678e8c75f270f557406de1ea4d54c058405d51078888b5a24b251f0680b2,PodSandboxId:37756fdf1a3717047c0fd184ce9a77415ea73d8ba538e154786cde5d6f4806c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765959863932722534,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2d63c8-592a-4d1c-a3e6-cfcc8c3d6ee9,},Annotations:map[string]string{io.kubernetes.container.hash: 6
c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5b186eccdc4e422c6f76bbba5bc0ef1eb32b926601973fa064f3bcadfc7b7c,PodSandboxId:34d0183982ab8397230f2901ed1e419778a678d9a64e966f6f60c06d4840cb51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1765959860173761568,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ce8ca7de0471d348f97a4fbf14f4cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kub
ernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91e15e9406ddd182134e86ad3b00b12cb5607dc046505f7a127894a576e9419,PodSandboxId:65d7100394df905a2f9450b95c42db3ef1ee66205b82099018b319f4a048e21d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1765959860175676559,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-122342,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 3e2d39ca3e768afa6c2876cc33ec430d,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77651de2ba10aee37eab124a6b2a972669382d5aa8f7ca2419c76bc2a15bff59,PodSandboxId:ff0e203b9a657d1d2b8e9034a0d8f8948a408b2d463c68ba29cacc577e6b0184,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765959860074675831,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kub
ernetes.pod.name: etcd-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba342c5421c70f9a936a60f9dc9b0678,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=924c12f1-fa54-4e66-9bcd-a5c90459d445 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:31:40 functional-122342 crio[5246]: time="2025-12-17 08:31:40.289462489Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=81356b92-57be-45a1-96b2-743c4bdc83ac name=/runtime.v1.RuntimeService/Version
	Dec 17 08:31:40 functional-122342 crio[5246]: time="2025-12-17 08:31:40.289719484Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=81356b92-57be-45a1-96b2-743c4bdc83ac name=/runtime.v1.RuntimeService/Version
	Dec 17 08:31:40 functional-122342 crio[5246]: time="2025-12-17 08:31:40.291514472Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=21cfd9f6-d2ba-4711-bcb9-95c86733e6ee name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:31:40 functional-122342 crio[5246]: time="2025-12-17 08:31:40.292554340Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765960300292531936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:243805,},InodesUsed:&UInt64Value{Value:113,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=21cfd9f6-d2ba-4711-bcb9-95c86733e6ee name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:31:40 functional-122342 crio[5246]: time="2025-12-17 08:31:40.293477023Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8fda22bd-af7c-4fa8-b34c-da408010da64 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:31:40 functional-122342 crio[5246]: time="2025-12-17 08:31:40.293601958Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8fda22bd-af7c-4fa8-b34c-da408010da64 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:31:40 functional-122342 crio[5246]: time="2025-12-17 08:31:40.293995733Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b60691b33127eb1d69e1d578a2df5194074c8afb6040d0e859e225b2a7faf00b,PodSandboxId:68d561cd832ffedc54088464d71a19192613de8752150781690c77e00d0d3ac3,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765960134445008730,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-g9l2q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2acc9aa8-e16c-4de2-8104-2731803a9cc0,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deb72c7423d781bbd6336ed4a40839f09a28f7732d98aa1988a91d149b13d536,PodSandboxId:8162d268f90553392b21f8e4efa0af39292098ae35c0c24e47eae372f6246e9b,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765960002487351390,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db4392ab-d4a8-4176-a0d5-5e79bc4d77ed,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78c0b6faa76255f96dd62af5fd66262225c7dd99481de1050acd67d212d3a5cb,PodSandboxId:132c90e54869af00ad8a1cc4d25020ae19d0c9956e1c1d259f3759dac16cf034,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765959992443509180,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ab11f47f-3405-419b-90f6-1e11c8e5cd9e,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3424101cf26fda518503e05bf7b2d5e7236ea52cdc904f8a164497e3ec7063,PodSandboxId:71e2c74872b5a33003691fff2ed5442545cca707510771d0e0b74eb1516adf7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765959908735934561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zpmv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08fa188e-90ff-4dfe-86df-b40eef36765d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.po
rts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d925826c8f6f8fbf3c96ac2c39bef90c2c64073d2bb8523c2026f14a05cad7e,PodSandboxId:8a40b453568e36ababc4eeadcb175c32b9c97488d4dde5746b6559f9bb078156,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_R
UNNING,CreatedAt:1765959908522452636,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-954rb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daf12b32-6915-43d9-b1b0-c897d53bca11,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dbb693f0f59248e6ead291b0e92aba3429534efec95a15bc4cda79405de7355,PodSandboxId:3a827042a4940a14c587f4cec329b8ba40f0107254e0b0ed9748d6b967c4d911,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765959
908453690378,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2d63c8-592a-4d1c-a3e6-cfcc8c3d6ee9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a00bd056609476d6ee4f2b20d1d2344e721e30762f27a0687744b8b3486f3149,PodSandboxId:3e1ed5f767039e00e4df38de35f9176c60ec1c8af91b99d507c5d302d1931135,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765959903804915433,Labels:map[str
ing]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba342c5421c70f9a936a60f9dc9b0678,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:238d16a286b7c1e3f843ca9f49efa5e9c395f1d4f26eb3f2285acd0dbeacc6d0,PodSandboxId:82fb78bdc40026c80bf5943e939619fabe51c64ec5f5cacb57dc8ccda1c6bd15,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736
d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765959903702953944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300532227cce2617d7152b0b0ce7d38d,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8690c53de65dfa345567ecd5227894357e1b82b998968d2fd4d632439c1e17e3,PodSandboxId:e38614e0cd7b821d98279f976cd02f486d895c06fb834319ae2378e85efee607,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotat
ions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765959903731882170,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2d39ca3e768afa6c2876cc33ec430d,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ec936640a50471bba654dfa6356efdd70d3a5729b21c3a8f12de69136db7cf,PodSandboxId:ef656908d2b6599d327cb98383f24340bb090f05d05c11ef96e5e7634482c200,Metadata:&ContainerMetadata
{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765959903722314004,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ce8ca7de0471d348f97a4fbf14f4cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:733f311df4b6f138b4447648303dc3ccf991f4465c1a805ce4639457f18e
318c,PodSandboxId:546f8fb776a5c3846f1370782db9e94fd5fcd478121542ff2d33b26798b43225,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765959864347695593,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zpmv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08fa188e-90ff-4dfe-86df-b40eef36765d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness
-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26dd379d79c851d78e49afda695d69aa1a1496132def151ccc64aca74a55ba04,PodSandboxId:d57f6fa06bf8adabfb271001e8803287eb8ffadb89098032e8d2a68c25ee6f1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1765959863911689903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-954rb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daf12b32-6915-43d9-b1b0-c897d53bca11,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dba1678e8c75f270f557406de1ea4d54c058405d51078888b5a24b251f0680b2,PodSandboxId:37756fdf1a3717047c0fd184ce9a77415ea73d8ba538e154786cde5d6f4806c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765959863932722534,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2d63c8-592a-4d1c-a3e6-cfcc8c3d6ee9,},Annotations:map[string]string{io.kubernetes.container.hash: 6
c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5b186eccdc4e422c6f76bbba5bc0ef1eb32b926601973fa064f3bcadfc7b7c,PodSandboxId:34d0183982ab8397230f2901ed1e419778a678d9a64e966f6f60c06d4840cb51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1765959860173761568,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ce8ca7de0471d348f97a4fbf14f4cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kub
ernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91e15e9406ddd182134e86ad3b00b12cb5607dc046505f7a127894a576e9419,PodSandboxId:65d7100394df905a2f9450b95c42db3ef1ee66205b82099018b319f4a048e21d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1765959860175676559,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-122342,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 3e2d39ca3e768afa6c2876cc33ec430d,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77651de2ba10aee37eab124a6b2a972669382d5aa8f7ca2419c76bc2a15bff59,PodSandboxId:ff0e203b9a657d1d2b8e9034a0d8f8948a408b2d463c68ba29cacc577e6b0184,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765959860074675831,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kub
ernetes.pod.name: etcd-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba342c5421c70f9a936a60f9dc9b0678,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8fda22bd-af7c-4fa8-b34c-da408010da64 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                         CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b60691b33127e       public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036   2 minutes ago       Running             mysql                     0                   68d561cd832ff       mysql-6bcdcbc558-g9l2q                      default
	deb72c7423d78       a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c                                              4 minutes ago       Running             myfrontend                0                   8162d268f9055       sp-pod                                      default
	78c0b6faa7625       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e           5 minutes ago       Exited              mount-munger              0                   132c90e54869a       busybox-mount                               default
	4c3424101cf26       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                              6 minutes ago       Running             coredns                   2                   71e2c74872b5a       coredns-66bc5c9577-zpmv6                    kube-system
	3d925826c8f6f       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                              6 minutes ago       Running             kube-proxy                2                   8a40b453568e3       kube-proxy-954rb                            kube-system
	3dbb693f0f592       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              6 minutes ago       Running             storage-provisioner       2                   3a827042a4940       storage-provisioner                         kube-system
	a00bd05660947       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                              6 minutes ago       Running             etcd                      2                   3e1ed5f767039       etcd-functional-122342                      kube-system
	8690c53de65df       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                              6 minutes ago       Running             kube-controller-manager   2                   e38614e0cd7b8       kube-controller-manager-functional-122342   kube-system
	a1ec936640a50       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                              6 minutes ago       Running             kube-scheduler            2                   ef656908d2b65       kube-scheduler-functional-122342            kube-system
	238d16a286b7c       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                              6 minutes ago       Running             kube-apiserver            0                   82fb78bdc4002       kube-apiserver-functional-122342            kube-system
	733f311df4b6f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                              7 minutes ago       Exited              coredns                   1                   546f8fb776a5c       coredns-66bc5c9577-zpmv6                    kube-system
	dba1678e8c75f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              7 minutes ago       Exited              storage-provisioner       1                   37756fdf1a371       storage-provisioner                         kube-system
	26dd379d79c85       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                              7 minutes ago       Exited              kube-proxy                1                   d57f6fa06bf8a       kube-proxy-954rb                            kube-system
	b91e15e9406dd       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                              7 minutes ago       Exited              kube-controller-manager   1                   65d7100394df9       kube-controller-manager-functional-122342   kube-system
	8e5b186eccdc4       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                              7 minutes ago       Exited              kube-scheduler            1                   34d0183982ab8       kube-scheduler-functional-122342            kube-system
	77651de2ba10a       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                              7 minutes ago       Exited              etcd                      1                   ff0e203b9a657       etcd-functional-122342                      kube-system
	
	
	==> coredns [4c3424101cf26fda518503e05bf7b2d5e7236ea52cdc904f8a164497e3ec7063] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35826 - 7078 "HINFO IN 971251318359296826.1105888965815954832. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.056436496s
	
	
	==> coredns [733f311df4b6f138b4447648303dc3ccf991f4465c1a805ce4639457f18e318c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34815 - 36151 "HINFO IN 8693930430730607389.7737780181467506304. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.110641538s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-122342
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-122342
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144
	                    minikube.k8s.io/name=functional-122342
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T08_23_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 08:23:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-122342
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 08:31:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 08:29:32 +0000   Wed, 17 Dec 2025 08:23:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 08:29:32 +0000   Wed, 17 Dec 2025 08:23:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 08:29:32 +0000   Wed, 17 Dec 2025 08:23:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 08:29:32 +0000   Wed, 17 Dec 2025 08:23:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    functional-122342
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 61d98f57dbb24a4cba49d15398ceb72c
	  System UUID:                61d98f57-dbb2-4a4c-ba49-d15398ceb72c
	  Boot ID:                    5e8a90d3-1ee9-49a4-ade8-286a96f0d59c
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-76g5z                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  default                     hello-node-connect-7d85dfc575-b8nbt           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	  default                     mysql-6bcdcbc558-g9l2q                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    4m45s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 coredns-66bc5c9577-zpmv6                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m49s
	  kube-system                 etcd-functional-122342                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m55s
	  kube-system                 kube-apiserver-functional-122342              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-controller-manager-functional-122342     200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m56s
	  kube-system                 kube-proxy-954rb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m50s
	  kube-system                 kube-scheduler-functional-122342              100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m54s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m47s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-xb94z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-wzrhk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m47s                  kube-proxy       
	  Normal  Starting                 6m31s                  kube-proxy       
	  Normal  Starting                 7m16s                  kube-proxy       
	  Normal  Starting                 8m2s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m2s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m1s (x8 over 8m2s)    kubelet          Node functional-122342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m1s (x8 over 8m2s)    kubelet          Node functional-122342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m1s (x7 over 8m2s)    kubelet          Node functional-122342 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     7m54s                  kubelet          Node functional-122342 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m54s                  kubelet          Node functional-122342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m54s                  kubelet          Node functional-122342 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                7m54s                  kubelet          Node functional-122342 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  7m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m50s                  node-controller  Node functional-122342 event: Registered Node functional-122342 in Controller
	  Normal  NodeHasSufficientPID     7m21s (x7 over 7m21s)  kubelet          Node functional-122342 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m21s (x8 over 7m21s)  kubelet          Node functional-122342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m21s (x8 over 7m21s)  kubelet          Node functional-122342 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m21s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m14s                  node-controller  Node functional-122342 event: Registered Node functional-122342 in Controller
	  Normal  Starting                 6m38s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m37s (x8 over 6m38s)  kubelet          Node functional-122342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m37s (x8 over 6m38s)  kubelet          Node functional-122342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m37s (x7 over 6m38s)  kubelet          Node functional-122342 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m30s                  node-controller  Node functional-122342 event: Registered Node functional-122342 in Controller
	
	
	==> dmesg <==
	[  +0.000477] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.162886] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.082563] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.092465] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.130289] kauditd_printk_skb: 171 callbacks suppressed
	[  +6.143001] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.029686] kauditd_printk_skb: 254 callbacks suppressed
	[Dec17 08:24] kauditd_printk_skb: 39 callbacks suppressed
	[  +6.101094] kauditd_printk_skb: 56 callbacks suppressed
	[  +4.563991] kauditd_printk_skb: 176 callbacks suppressed
	[ +14.146643] kauditd_printk_skb: 137 callbacks suppressed
	[  +0.111072] kauditd_printk_skb: 12 callbacks suppressed
	[Dec17 08:25] kauditd_printk_skb: 78 callbacks suppressed
	[  +5.541699] kauditd_printk_skb: 168 callbacks suppressed
	[  +4.464470] kauditd_printk_skb: 133 callbacks suppressed
	[  +0.656489] kauditd_printk_skb: 169 callbacks suppressed
	[  +0.000268] kauditd_printk_skb: 32 callbacks suppressed
	[Dec17 08:26] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.838422] kauditd_printk_skb: 46 callbacks suppressed
	[ +13.673701] kauditd_printk_skb: 145 callbacks suppressed
	[Dec17 08:27] kauditd_printk_skb: 38 callbacks suppressed
	[Dec17 08:29] crun[9808]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +1.884545] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [77651de2ba10aee37eab124a6b2a972669382d5aa8f7ca2419c76bc2a15bff59] <==
	{"level":"warn","ts":"2025-12-17T08:24:22.464227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:22.474641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:22.485571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:22.489697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:22.497007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:22.504624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:22.581021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39754","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T08:24:46.378683Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-17T08:24:46.378825Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-122342","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.97:2380"],"advertise-client-urls":["https://192.168.39.97:2379"]}
	{"level":"error","ts":"2025-12-17T08:24:46.379563Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T08:24:46.462637Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T08:24:46.462691Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T08:24:46.462708Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f61fae125a956d36","current-leader-member-id":"f61fae125a956d36"}
	{"level":"info","ts":"2025-12-17T08:24:46.462758Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-17T08:24:46.462745Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-17T08:24:46.462804Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T08:24:46.462869Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T08:24:46.462876Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-17T08:24:46.462913Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.97:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T08:24:46.462920Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.97:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T08:24:46.462924Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.97:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T08:24:46.466216Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.97:2380"}
	{"level":"error","ts":"2025-12-17T08:24:46.466294Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.97:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T08:24:46.466316Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2025-12-17T08:24:46.466322Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-122342","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.97:2380"],"advertise-client-urls":["https://192.168.39.97:2379"]}
	
	
	==> etcd [a00bd056609476d6ee4f2b20d1d2344e721e30762f27a0687744b8b3486f3149] <==
	{"level":"warn","ts":"2025-12-17T08:28:51.656984Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T08:28:51.180468Z","time spent":"476.381472ms","remote":"127.0.0.1:35770","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-12-17T08:28:51.657276Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"341.368723ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2025-12-17T08:28:51.658521Z","caller":"traceutil/trace.go:172","msg":"trace[1165435058] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1049; }","duration":"342.653747ms","start":"2025-12-17T08:28:51.315857Z","end":"2025-12-17T08:28:51.658511Z","steps":["trace[1165435058] 'agreement among raft nodes before linearized reading'  (duration: 341.317634ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T08:28:51.662434Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T08:28:51.315843Z","time spent":"346.570089ms","remote":"127.0.0.1:35730","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":1,"response size":1140,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 "}
	{"level":"warn","ts":"2025-12-17T08:28:51.657464Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"179.105953ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T08:28:51.662566Z","caller":"traceutil/trace.go:172","msg":"trace[1597772687] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1049; }","duration":"184.207291ms","start":"2025-12-17T08:28:51.478351Z","end":"2025-12-17T08:28:51.662558Z","steps":["trace[1597772687] 'agreement among raft nodes before linearized reading'  (duration: 179.100409ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:28:52.517514Z","caller":"traceutil/trace.go:172","msg":"trace[861961697] transaction","detail":"{read_only:false; response_revision:1051; number_of_response:1; }","duration":"151.008462ms","start":"2025-12-17T08:28:52.366492Z","end":"2025-12-17T08:28:52.517500Z","steps":["trace[861961697] 'process raft request'  (duration: 150.92497ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:28:54.062470Z","caller":"traceutil/trace.go:172","msg":"trace[1832892770] linearizableReadLoop","detail":"{readStateIndex:1167; appliedIndex:1167; }","duration":"238.978268ms","start":"2025-12-17T08:28:53.823479Z","end":"2025-12-17T08:28:54.062457Z","steps":["trace[1832892770] 'read index received'  (duration: 238.973921ms)","trace[1832892770] 'applied index is now lower than readState.Index'  (duration: 3.526µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T08:28:54.062631Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"239.14136ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/hello-node-75c85bcc94-76g5z.1881f3418981d978\" limit:1 ","response":"range_response_count:1 size:775"}
	{"level":"info","ts":"2025-12-17T08:28:54.062654Z","caller":"traceutil/trace.go:172","msg":"trace[1329154762] range","detail":"{range_begin:/registry/events/default/hello-node-75c85bcc94-76g5z.1881f3418981d978; range_end:; response_count:1; response_revision:1051; }","duration":"239.174481ms","start":"2025-12-17T08:28:53.823472Z","end":"2025-12-17T08:28:54.062646Z","steps":["trace[1329154762] 'agreement among raft nodes before linearized reading'  (duration: 239.049636ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:28:54.062668Z","caller":"traceutil/trace.go:172","msg":"trace[1395157181] transaction","detail":"{read_only:false; response_revision:1052; number_of_response:1; }","duration":"354.645285ms","start":"2025-12-17T08:28:53.708012Z","end":"2025-12-17T08:28:54.062658Z","steps":["trace[1395157181] 'process raft request'  (duration: 354.513528ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T08:28:54.062754Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T08:28:53.707996Z","time spent":"354.70909ms","remote":"127.0.0.1:35730","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1050 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-12-17T08:28:54.062947Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"237.097229ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/hello-node-75c85bcc94-76g5z\" limit:1 ","response":"range_response_count:1 size:3273"}
	{"level":"info","ts":"2025-12-17T08:28:54.062995Z","caller":"traceutil/trace.go:172","msg":"trace[784849255] range","detail":"{range_begin:/registry/pods/default/hello-node-75c85bcc94-76g5z; range_end:; response_count:1; response_revision:1052; }","duration":"237.145054ms","start":"2025-12-17T08:28:53.825841Z","end":"2025-12-17T08:28:54.062986Z","steps":["trace[784849255] 'agreement among raft nodes before linearized reading'  (duration: 237.031568ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:28:54.250981Z","caller":"traceutil/trace.go:172","msg":"trace[1309328681] linearizableReadLoop","detail":"{readStateIndex:1168; appliedIndex:1168; }","duration":"182.460576ms","start":"2025-12-17T08:28:54.068468Z","end":"2025-12-17T08:28:54.250928Z","steps":["trace[1309328681] 'read index received'  (duration: 182.454653ms)","trace[1309328681] 'applied index is now lower than readState.Index'  (duration: 5.293µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T08:28:54.257284Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"188.762292ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T08:28:54.257320Z","caller":"traceutil/trace.go:172","msg":"trace[625439943] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1052; }","duration":"188.846255ms","start":"2025-12-17T08:28:54.068465Z","end":"2025-12-17T08:28:54.257311Z","steps":["trace[625439943] 'agreement among raft nodes before linearized reading'  (duration: 182.59633ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:28:54.258040Z","caller":"traceutil/trace.go:172","msg":"trace[1866825117] transaction","detail":"{read_only:false; response_revision:1053; number_of_response:1; }","duration":"190.03831ms","start":"2025-12-17T08:28:54.067988Z","end":"2025-12-17T08:28:54.258027Z","steps":["trace[1866825117] 'process raft request'  (duration: 183.10723ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:28:54.258981Z","caller":"traceutil/trace.go:172","msg":"trace[1580778762] transaction","detail":"{read_only:false; response_revision:1054; number_of_response:1; }","duration":"184.319892ms","start":"2025-12-17T08:28:54.074603Z","end":"2025-12-17T08:28:54.258923Z","steps":["trace[1580778762] 'process raft request'  (duration: 183.036261ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:28:56.251532Z","caller":"traceutil/trace.go:172","msg":"trace[1846296219] transaction","detail":"{read_only:false; response_revision:1064; number_of_response:1; }","duration":"177.396117ms","start":"2025-12-17T08:28:56.074122Z","end":"2025-12-17T08:28:56.251518Z","steps":["trace[1846296219] 'process raft request'  (duration: 177.294056ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:29:00.663369Z","caller":"traceutil/trace.go:172","msg":"trace[1032096049] linearizableReadLoop","detail":"{readStateIndex:1184; appliedIndex:1184; }","duration":"186.148761ms","start":"2025-12-17T08:29:00.477206Z","end":"2025-12-17T08:29:00.663355Z","steps":["trace[1032096049] 'read index received'  (duration: 185.944377ms)","trace[1032096049] 'applied index is now lower than readState.Index'  (duration: 203.462µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T08:29:00.663674Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"186.41735ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T08:29:00.663715Z","caller":"traceutil/trace.go:172","msg":"trace[215806706] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1066; }","duration":"186.508168ms","start":"2025-12-17T08:29:00.477201Z","end":"2025-12-17T08:29:00.663709Z","steps":["trace[215806706] 'agreement among raft nodes before linearized reading'  (duration: 186.398701ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:29:00.664872Z","caller":"traceutil/trace.go:172","msg":"trace[491599279] transaction","detail":"{read_only:false; response_revision:1067; number_of_response:1; }","duration":"388.059975ms","start":"2025-12-17T08:29:00.276802Z","end":"2025-12-17T08:29:00.664862Z","steps":["trace[491599279] 'process raft request'  (duration: 387.242816ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T08:29:00.664955Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T08:29:00.276787Z","time spent":"388.126689ms","remote":"127.0.0.1:35730","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1065 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 08:31:40 up 8 min,  0 users,  load average: 0.10, 0.29, 0.19
	Linux functional-122342 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Dec 16 03:41:16 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [238d16a286b7c1e3f843ca9f49efa5e9c395f1d4f26eb3f2285acd0dbeacc6d0] <==
	I1217 08:25:07.375049       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1217 08:25:07.386089       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 08:25:07.401170       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 08:25:07.894973       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 08:25:08.174715       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 08:25:09.063315       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 08:25:09.095941       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 08:25:09.123903       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 08:25:09.131123       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 08:25:10.832704       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 08:25:10.982464       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 08:25:11.030473       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 08:25:24.679049       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.243.214"}
	I1217 08:25:29.060295       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.184.126"}
	I1217 08:25:29.172335       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.238.194"}
	I1217 08:26:40.281622       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 08:26:40.592487       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.16.212"}
	I1217 08:26:40.610052       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.16.54"}
	E1217 08:26:41.114362       1 conn.go:339] Error on socket receive: read tcp 192.168.39.97:8441->192.168.39.1:33808: use of closed network connection
	E1217 08:26:48.081135       1 conn.go:339] Error on socket receive: read tcp 192.168.39.97:8441->192.168.39.1:47744: use of closed network connection
	I1217 08:26:55.947742       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.109.119.15"}
	E1217 08:29:01.166218       1 conn.go:339] Error on socket receive: read tcp 192.168.39.97:8441->192.168.39.1:39382: use of closed network connection
	E1217 08:29:02.572157       1 conn.go:339] Error on socket receive: read tcp 192.168.39.97:8441->192.168.39.1:39398: use of closed network connection
	E1217 08:29:04.132135       1 conn.go:339] Error on socket receive: read tcp 192.168.39.97:8441->192.168.39.1:39420: use of closed network connection
	E1217 08:29:07.155921       1 conn.go:339] Error on socket receive: read tcp 192.168.39.97:8441->192.168.39.1:39434: use of closed network connection
	
	
	==> kube-controller-manager [8690c53de65dfa345567ecd5227894357e1b82b998968d2fd4d632439c1e17e3] <==
	I1217 08:25:10.637349       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1217 08:25:10.637973       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1217 08:25:10.639729       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1217 08:25:10.647707       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 08:25:10.647775       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 08:25:10.647786       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 08:25:10.647791       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 08:25:10.655101       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 08:25:10.655161       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1217 08:25:10.655574       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 08:25:10.659343       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1217 08:25:10.669943       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 08:25:10.678670       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1217 08:25:10.680003       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 08:25:10.683337       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1217 08:25:10.683405       1 shared_informer.go:356] "Caches are synced" controller="service account"
	E1217 08:26:40.373827       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 08:26:40.384790       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 08:26:40.397861       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 08:26:40.402313       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 08:26:40.406452       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 08:26:40.406561       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 08:26:40.421002       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 08:26:40.430776       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 08:26:40.430927       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [b91e15e9406ddd182134e86ad3b00b12cb5607dc046505f7a127894a576e9419] <==
	I1217 08:24:26.572294       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1217 08:24:26.572430       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1217 08:24:26.573601       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 08:24:26.573682       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1217 08:24:26.573728       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 08:24:26.574700       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 08:24:26.574745       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 08:24:26.574899       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 08:24:26.574984       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 08:24:26.575056       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 08:24:26.576510       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1217 08:24:26.577734       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 08:24:26.597033       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1217 08:24:26.597079       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1217 08:24:26.597097       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1217 08:24:26.597102       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1217 08:24:26.597106       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1217 08:24:26.600496       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 08:24:26.604810       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 08:24:26.609869       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 08:24:26.609987       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1217 08:24:26.621912       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 08:24:26.621925       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 08:24:26.621928       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 08:24:26.621930       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [26dd379d79c851d78e49afda695d69aa1a1496132def151ccc64aca74a55ba04] <==
	I1217 08:24:24.306948       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 08:24:24.408753       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 08:24:24.408959       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.97"]
	E1217 08:24:24.409406       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 08:24:24.472565       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1217 08:24:24.473136       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1217 08:24:24.473625       1 server_linux.go:132] "Using iptables Proxier"
	I1217 08:24:24.489420       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 08:24:24.490375       1 server.go:527] "Version info" version="v1.34.3"
	I1217 08:24:24.490835       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:24:24.495916       1 config.go:200] "Starting service config controller"
	I1217 08:24:24.495929       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 08:24:24.495941       1 config.go:106] "Starting endpoint slice config controller"
	I1217 08:24:24.495944       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 08:24:24.495953       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 08:24:24.495957       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 08:24:24.498702       1 config.go:309] "Starting node config controller"
	I1217 08:24:24.499760       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 08:24:24.499924       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 08:24:24.596845       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 08:24:24.596895       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 08:24:24.597330       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [3d925826c8f6f8fbf3c96ac2c39bef90c2c64073d2bb8523c2026f14a05cad7e] <==
	I1217 08:25:08.849135       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 08:25:08.950442       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 08:25:08.950565       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.97"]
	E1217 08:25:08.950716       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 08:25:08.992419       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1217 08:25:08.992478       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1217 08:25:08.992505       1 server_linux.go:132] "Using iptables Proxier"
	I1217 08:25:09.005187       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 08:25:09.005510       1 server.go:527] "Version info" version="v1.34.3"
	I1217 08:25:09.005536       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:25:09.010005       1 config.go:200] "Starting service config controller"
	I1217 08:25:09.010037       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 08:25:09.010055       1 config.go:106] "Starting endpoint slice config controller"
	I1217 08:25:09.010059       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 08:25:09.010077       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 08:25:09.010081       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 08:25:09.010846       1 config.go:309] "Starting node config controller"
	I1217 08:25:09.010853       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 08:25:09.010858       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 08:25:09.110469       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 08:25:09.110511       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 08:25:09.111175       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [8e5b186eccdc4e422c6f76bbba5bc0ef1eb32b926601973fa064f3bcadfc7b7c] <==
	I1217 08:24:21.380666       1 serving.go:386] Generated self-signed cert in-memory
	I1217 08:24:23.268738       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1217 08:24:23.268799       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:24:23.273765       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 08:24:23.274282       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1217 08:24:23.274346       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1217 08:24:23.274390       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:24:23.274410       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:24:23.274438       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 08:24:23.274454       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 08:24:23.274349       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 08:24:23.376202       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1217 08:24:23.376326       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:24:23.376335       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 08:24:46.375307       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1217 08:24:46.375436       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1217 08:24:46.375449       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1217 08:24:46.375477       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 08:24:46.375536       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:24:46.375557       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1217 08:24:46.375741       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1217 08:24:46.381328       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a1ec936640a50471bba654dfa6356efdd70d3a5729b21c3a8f12de69136db7cf] <==
	I1217 08:25:05.215724       1 serving.go:386] Generated self-signed cert in-memory
	W1217 08:25:07.249449       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 08:25:07.249488       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 08:25:07.249499       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 08:25:07.249506       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 08:25:07.315539       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1217 08:25:07.315574       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:25:07.321072       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 08:25:07.321202       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:25:07.321215       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:25:07.321269       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 08:25:07.421806       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 08:30:53 functional-122342 kubelet[5610]: E1217 08:30:53.315052    5610 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765960253314680433  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:243805}  inodes_used:{value:113}}"
	Dec 17 08:31:02 functional-122342 kubelet[5610]: E1217 08:31:02.933219    5610 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod2b2d63c8-592a-4d1c-a3e6-cfcc8c3d6ee9/crio-37756fdf1a3717047c0fd184ce9a77415ea73d8ba538e154786cde5d6f4806c8: Error finding container 37756fdf1a3717047c0fd184ce9a77415ea73d8ba538e154786cde5d6f4806c8: Status 404 returned error can't find the container with id 37756fdf1a3717047c0fd184ce9a77415ea73d8ba538e154786cde5d6f4806c8
	Dec 17 08:31:02 functional-122342 kubelet[5610]: E1217 08:31:02.933669    5610 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod8ce8ca7de0471d348f97a4fbf14f4cf4/crio-34d0183982ab8397230f2901ed1e419778a678d9a64e966f6f60c06d4840cb51: Error finding container 34d0183982ab8397230f2901ed1e419778a678d9a64e966f6f60c06d4840cb51: Status 404 returned error can't find the container with id 34d0183982ab8397230f2901ed1e419778a678d9a64e966f6f60c06d4840cb51
	Dec 17 08:31:02 functional-122342 kubelet[5610]: E1217 08:31:02.933930    5610 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod3e2d39ca3e768afa6c2876cc33ec430d/crio-65d7100394df905a2f9450b95c42db3ef1ee66205b82099018b319f4a048e21d: Error finding container 65d7100394df905a2f9450b95c42db3ef1ee66205b82099018b319f4a048e21d: Status 404 returned error can't find the container with id 65d7100394df905a2f9450b95c42db3ef1ee66205b82099018b319f4a048e21d
	Dec 17 08:31:02 functional-122342 kubelet[5610]: E1217 08:31:02.934360    5610 manager.go:1116] Failed to create existing container: /kubepods/besteffort/poddaf12b32-6915-43d9-b1b0-c897d53bca11/crio-d57f6fa06bf8adabfb271001e8803287eb8ffadb89098032e8d2a68c25ee6f1a: Error finding container d57f6fa06bf8adabfb271001e8803287eb8ffadb89098032e8d2a68c25ee6f1a: Status 404 returned error can't find the container with id d57f6fa06bf8adabfb271001e8803287eb8ffadb89098032e8d2a68c25ee6f1a
	Dec 17 08:31:02 functional-122342 kubelet[5610]: E1217 08:31:02.934545    5610 manager.go:1116] Failed to create existing container: /kubepods/burstable/podba342c5421c70f9a936a60f9dc9b0678/crio-ff0e203b9a657d1d2b8e9034a0d8f8948a408b2d463c68ba29cacc577e6b0184: Error finding container ff0e203b9a657d1d2b8e9034a0d8f8948a408b2d463c68ba29cacc577e6b0184: Status 404 returned error can't find the container with id ff0e203b9a657d1d2b8e9034a0d8f8948a408b2d463c68ba29cacc577e6b0184
	Dec 17 08:31:02 functional-122342 kubelet[5610]: E1217 08:31:02.934816    5610 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod08fa188e-90ff-4dfe-86df-b40eef36765d/crio-546f8fb776a5c3846f1370782db9e94fd5fcd478121542ff2d33b26798b43225: Error finding container 546f8fb776a5c3846f1370782db9e94fd5fcd478121542ff2d33b26798b43225: Status 404 returned error can't find the container with id 546f8fb776a5c3846f1370782db9e94fd5fcd478121542ff2d33b26798b43225
	Dec 17 08:31:03 functional-122342 kubelet[5610]: E1217 08:31:03.317134    5610 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765960263316850369  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:243805}  inodes_used:{value:113}}"
	Dec 17 08:31:03 functional-122342 kubelet[5610]: E1217 08:31:03.317177    5610 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765960263316850369  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:243805}  inodes_used:{value:113}}"
	Dec 17 08:31:03 functional-122342 kubelet[5610]: E1217 08:31:03.391757    5610 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Dec 17 08:31:03 functional-122342 kubelet[5610]: E1217 08:31:03.391799    5610 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Dec 17 08:31:03 functional-122342 kubelet[5610]: E1217 08:31:03.392007    5610 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-76g5z_default(61c3ac51-6a79-43ce-b0ec-bd6c030bf75b): ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 17 08:31:03 functional-122342 kubelet[5610]: E1217 08:31:03.392046    5610 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-76g5z" podUID="61c3ac51-6a79-43ce-b0ec-bd6c030bf75b"
	Dec 17 08:31:13 functional-122342 kubelet[5610]: E1217 08:31:13.320110    5610 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765960273319638312  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:243805}  inodes_used:{value:113}}"
	Dec 17 08:31:13 functional-122342 kubelet[5610]: E1217 08:31:13.320422    5610 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765960273319638312  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:243805}  inodes_used:{value:113}}"
	Dec 17 08:31:14 functional-122342 kubelet[5610]: E1217 08:31:14.821435    5610 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-76g5z" podUID="61c3ac51-6a79-43ce-b0ec-bd6c030bf75b"
	Dec 17 08:31:23 functional-122342 kubelet[5610]: E1217 08:31:23.322515    5610 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765960283322171525  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:243805}  inodes_used:{value:113}}"
	Dec 17 08:31:23 functional-122342 kubelet[5610]: E1217 08:31:23.322576    5610 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765960283322171525  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:243805}  inodes_used:{value:113}}"
	Dec 17 08:31:29 functional-122342 kubelet[5610]: E1217 08:31:29.822080    5610 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-76g5z" podUID="61c3ac51-6a79-43ce-b0ec-bd6c030bf75b"
	Dec 17 08:31:33 functional-122342 kubelet[5610]: E1217 08:31:33.324130    5610 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765960293323829659  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:243805}  inodes_used:{value:113}}"
	Dec 17 08:31:33 functional-122342 kubelet[5610]: E1217 08:31:33.324200    5610 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765960293323829659  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:243805}  inodes_used:{value:113}}"
	Dec 17 08:31:34 functional-122342 kubelet[5610]: E1217 08:31:34.072328    5610 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Dec 17 08:31:34 functional-122342 kubelet[5610]: E1217 08:31:34.072416    5610 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Dec 17 08:31:34 functional-122342 kubelet[5610]: E1217 08:31:34.072620    5610 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-b8nbt_default(a03a3faa-9feb-43f7-86f2-a398df95eddd): ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 17 08:31:34 functional-122342 kubelet[5610]: E1217 08:31:34.073278    5610 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-b8nbt" podUID="a03a3faa-9feb-43f7-86f2-a398df95eddd"
	
	
	==> storage-provisioner [3dbb693f0f59248e6ead291b0e92aba3429534efec95a15bc4cda79405de7355] <==
	W1217 08:31:15.426678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:31:17.430064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:31:17.438005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:31:19.441205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:31:19.446045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:31:21.449002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:31:21.456856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:31:23.460734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:31:23.466080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:31:25.469887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:31:25.478599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:31:27.481854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:31:27.486935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:31:29.489679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:31:29.494559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:31:31.498548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:31:31.504118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:31:33.506832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:31:33.514847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:31:35.518500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:31:35.526570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:31:37.529089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:31:37.533573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:31:39.537339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:31:39.546784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [dba1678e8c75f270f557406de1ea4d54c058405d51078888b5a24b251f0680b2] <==
	I1217 08:24:24.152013       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 08:24:24.184290       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 08:24:24.184528       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 08:24:24.190203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:24:27.645013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:24:31.904931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:24:35.505512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:24:38.559669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:24:41.581804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:24:41.593137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 08:24:41.593895       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 08:24:41.594111       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-122342_d356f3bd-f6df-4a27-b67c-4da63394224b!
	I1217 08:24:41.595849       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"159536d6-e36d-4069-a75a-1c5c38b11b6e", APIVersion:"v1", ResourceVersion:"546", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-122342_d356f3bd-f6df-4a27-b67c-4da63394224b became leader
	W1217 08:24:41.598733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:24:41.607715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 08:24:41.695516       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-122342_d356f3bd-f6df-4a27-b67c-4da63394224b!
	W1217 08:24:43.611819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:24:43.616110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:24:45.619287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:24:45.624300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
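The repeated client-go warnings in the storage-provisioner log above come from its leader election, which still takes its lock on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath, as the LeaderElection event shows); every acquire/renew call round-trips through the deprecated Endpoints API, so the warnings are noise rather than a cause of the test failure. A minimal way to inspect that lock from the same kubeconfig context, sketched here rather than taken from the recorded run:

	# Show the Endpoints object used as the election lock; the current holder is kept in
	# its control-plane.alpha.kubernetes.io/leader annotation (standard client-go behaviour).
	kubectl --context functional-122342 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml

	# Newer provisioners usually switch to a coordination.k8s.io Lease lock instead:
	kubectl --context functional-122342 -n kube-system get leases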
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-122342 -n functional-122342
helpers_test.go:270: (dbg) Run:  kubectl --context functional-122342 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-75c85bcc94-76g5z hello-node-connect-7d85dfc575-b8nbt dashboard-metrics-scraper-77bf4d6c4c-xb94z kubernetes-dashboard-855c9754f9-wzrhk
helpers_test.go:283: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-122342 describe pod busybox-mount hello-node-75c85bcc94-76g5z hello-node-connect-7d85dfc575-b8nbt dashboard-metrics-scraper-77bf4d6c4c-xb94z kubernetes-dashboard-855c9754f9-wzrhk
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-122342 describe pod busybox-mount hello-node-75c85bcc94-76g5z hello-node-connect-7d85dfc575-b8nbt dashboard-metrics-scraper-77bf4d6c4c-xb94z kubernetes-dashboard-855c9754f9-wzrhk: exit status 1 (84.41511ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-122342/192.168.39.97
	Start Time:       Wed, 17 Dec 2025 08:25:31 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://78c0b6faa76255f96dd62af5fd66262225c7dd99481de1050acd67d212d3a5cb
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 17 Dec 2025 08:26:32 +0000
	      Finished:     Wed, 17 Dec 2025 08:26:32 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-69vrq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-69vrq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  6m9s  default-scheduler  Successfully assigned default/busybox-mount to functional-122342
	  Normal  Pulling    6m9s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m9s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.023s (1m0.315s including waiting). Image size: 4631262 bytes.
	  Normal  Created    5m9s  kubelet            Created container: mount-munger
	  Normal  Started    5m9s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-76g5z
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-122342/192.168.39.97
	Start Time:       Wed, 17 Dec 2025 08:25:29 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bv45r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bv45r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m12s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-76g5z to functional-122342
	  Normal   Pulling    2m37s (x3 over 6m12s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     38s (x3 over 5m10s)    kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     38s (x3 over 5m10s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    12s (x4 over 5m10s)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     12s (x4 over 5m10s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-b8nbt
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-122342/192.168.39.97
	Start Time:       Wed, 17 Dec 2025 08:25:29 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8zczn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8zczn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m12s                 default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-b8nbt to functional-122342
	  Warning  Failed     5m41s                 kubelet            Failed to pull image "kicbase/echo-server": initializing source docker://kicbase/echo-server:latest: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    109s (x4 over 5m40s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     109s (x4 over 5m40s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    96s (x4 over 6m12s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7s (x4 over 5m41s)    kubelet            Error: ErrImagePull
	  Warning  Failed     7s (x3 over 4m35s)    kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-xb94z" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-wzrhk" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context functional-122342 describe pod busybox-mount hello-node-75c85bcc94-76g5z hello-node-connect-7d85dfc575-b8nbt dashboard-metrics-scraper-77bf4d6c4c-xb94z kubernetes-dashboard-855c9754f9-wzrhk: exit status 1
E1217 08:32:45.365680  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/DashboardCmd (302.03s)
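Two distinct problems are visible in the DashboardCmd post-mortem above: the two dashboard pods named in the non-running list had already been deleted by the time the describe ran (hence the NotFound errors on stderr and the exit status 1), and the hello-node pods are stuck in ImagePullBackOff because docker.io is rate-limiting unauthenticated pulls of kicbase/echo-server. A hedged workaround sketch, not something the harness does in this run; the secret name and credentials are placeholders:

	# Authenticate pulls so Docker Hub stops returning "toomanyrequests"; existing pods
	# must be recreated before they pick up the service-account pull secret.
	kubectl --context functional-122342 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<access-token>
	kubectl --context functional-122342 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"regcred"}]}'

	# Or preload the image into the node so no registry pull is needed; the path assumes
	# a tarball previously produced with "image save", as in the Audit table further down.
	minikube -p functional-122342 image load ./echo-server-save.tar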

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (602.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-122342 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-122342 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-b8nbt" [a03a3faa-9feb-43f7-86f2-a398df95eddd] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:338: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-122342 -n functional-122342
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-17 08:35:29.311783133 +0000 UTC m=+1204.046980483
functional_test.go:1645: (dbg) Run:  kubectl --context functional-122342 describe po hello-node-connect-7d85dfc575-b8nbt -n default
functional_test.go:1645: (dbg) kubectl --context functional-122342 describe po hello-node-connect-7d85dfc575-b8nbt -n default:
Name:             hello-node-connect-7d85dfc575-b8nbt
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-122342/192.168.39.97
Start Time:       Wed, 17 Dec 2025 08:25:29 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:  10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8zczn (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-8zczn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-b8nbt to functional-122342
  Warning  Failed     9m29s                kubelet            Failed to pull image "kicbase/echo-server": initializing source docker://kicbase/echo-server:latest: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    2m34s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     82s (x5 over 9m29s)  kubelet            Error: ErrImagePull
  Warning  Failed     82s (x4 over 8m23s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    7s (x15 over 9m28s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     7s (x15 over 9m28s)  kubelet            Error: ImagePullBackOff
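The event counters above compress the whole ten-minute wait: five pull attempts, each rejected with toomanyrequests, and fifteen BackOff events as kubelet's image-pull backoff grows toward its five-minute cap. The same timeline can be read back in order with a field selector on the pod name (a sketch, using the name from the describe output):

	kubectl --context functional-122342 -n default get events \
	  --field-selector involvedObject.name=hello-node-connect-7d85dfc575-b8nbt \
	  --sort-by=.lastTimestamp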
functional_test.go:1645: (dbg) Run:  kubectl --context functional-122342 logs hello-node-connect-7d85dfc575-b8nbt -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-122342 logs hello-node-connect-7d85dfc575-b8nbt -n default: exit status 1 (82.574936ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-b8nbt" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-122342 logs hello-node-connect-7d85dfc575-b8nbt -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-122342 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-b8nbt
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-122342/192.168.39.97
Start Time:       Wed, 17 Dec 2025 08:25:29 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:  10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8zczn (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-8zczn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-b8nbt to functional-122342
  Warning  Failed     9m29s                kubelet            Failed to pull image "kicbase/echo-server": initializing source docker://kicbase/echo-server:latest: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    2m34s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     82s (x5 over 9m29s)  kubelet            Error: ErrImagePull
  Warning  Failed     82s (x4 over 8m23s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    7s (x15 over 9m28s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     7s (x15 over 9m28s)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-122342 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-122342 logs -l app=hello-node-connect: exit status 1 (73.245607ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-b8nbt" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-122342 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-122342 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.184.126
IPs:                      10.96.184.126
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30826/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
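The service describe also shows why the connect test had nothing to reach: the Endpoints field is empty because the only pod matching the app=hello-node-connect selector never became Ready, so NodePort 30826 had no backend even before the pod wait timed out. A quick confirmation, sketched with the service name and selector from the output above:

	kubectl --context functional-122342 -n default get endpointslices \
	  -l kubernetes.io/service-name=hello-node-connect
	kubectl --context functional-122342 -n default get pods -l app=hello-node-connect \
	  -o jsonpath='{.items[*].status.containerStatuses[*].ready}'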
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-122342 -n functional-122342
helpers_test.go:253: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-122342 logs -n 25: (1.372108737s)
helpers_test.go:261: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-122342 image ls                                                                                                                                   │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:26 UTC │ 17 Dec 25 08:26 UTC │
	│ image          │ functional-122342 image save kicbase/echo-server:functional-122342 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:26 UTC │ 17 Dec 25 08:26 UTC │
	│ image          │ functional-122342 image rm kicbase/echo-server:functional-122342 --alsologtostderr                                                                           │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:26 UTC │ 17 Dec 25 08:26 UTC │
	│ image          │ functional-122342 image ls                                                                                                                                   │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:26 UTC │ 17 Dec 25 08:26 UTC │
	│ image          │ functional-122342 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:26 UTC │ 17 Dec 25 08:26 UTC │
	│ image          │ functional-122342 image ls                                                                                                                                   │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:26 UTC │ 17 Dec 25 08:26 UTC │
	│ image          │ functional-122342 image save --daemon kicbase/echo-server:functional-122342 --alsologtostderr                                                                │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:26 UTC │ 17 Dec 25 08:26 UTC │
	│ ssh            │ functional-122342 ssh sudo cat /etc/ssl/certs/897277.pem                                                                                                     │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:26 UTC │ 17 Dec 25 08:26 UTC │
	│ ssh            │ functional-122342 ssh sudo cat /usr/share/ca-certificates/897277.pem                                                                                         │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:26 UTC │ 17 Dec 25 08:26 UTC │
	│ ssh            │ functional-122342 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                     │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:26 UTC │ 17 Dec 25 08:26 UTC │
	│ ssh            │ functional-122342 ssh sudo cat /etc/ssl/certs/8972772.pem                                                                                                    │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:26 UTC │ 17 Dec 25 08:26 UTC │
	│ ssh            │ functional-122342 ssh sudo cat /usr/share/ca-certificates/8972772.pem                                                                                        │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:26 UTC │ 17 Dec 25 08:26 UTC │
	│ ssh            │ functional-122342 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                     │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:26 UTC │ 17 Dec 25 08:26 UTC │
	│ ssh            │ functional-122342 ssh sudo cat /etc/test/nested/copy/897277/hosts                                                                                            │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:26 UTC │ 17 Dec 25 08:26 UTC │
	│ image          │ functional-122342 image ls --format short --alsologtostderr                                                                                                  │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:29 UTC │ 17 Dec 25 08:29 UTC │
	│ image          │ functional-122342 image ls --format yaml --alsologtostderr                                                                                                   │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:29 UTC │ 17 Dec 25 08:29 UTC │
	│ ssh            │ functional-122342 ssh pgrep buildkitd                                                                                                                        │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:29 UTC │                     │
	│ image          │ functional-122342 image build -t localhost/my-image:functional-122342 testdata/build --alsologtostderr                                                       │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:29 UTC │ 17 Dec 25 08:29 UTC │
	│ image          │ functional-122342 image ls                                                                                                                                   │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:29 UTC │ 17 Dec 25 08:29 UTC │
	│ image          │ functional-122342 image ls --format table --alsologtostderr                                                                                                  │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:29 UTC │ 17 Dec 25 08:29 UTC │
	│ image          │ functional-122342 image ls --format json --alsologtostderr                                                                                                   │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:29 UTC │ 17 Dec 25 08:29 UTC │
	│ update-context │ functional-122342 update-context --alsologtostderr -v=2                                                                                                      │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:29 UTC │ 17 Dec 25 08:29 UTC │
	│ update-context │ functional-122342 update-context --alsologtostderr -v=2                                                                                                      │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:29 UTC │ 17 Dec 25 08:29 UTC │
	│ update-context │ functional-122342 update-context --alsologtostderr -v=2                                                                                                      │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:29 UTC │ 17 Dec 25 08:29 UTC │
	│ service        │ functional-122342 service list                                                                                                                               │ functional-122342 │ jenkins │ v1.37.0 │ 17 Dec 25 08:35 UTC │                     │
	└────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 08:26:39
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 08:26:39.388994  903836 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:26:39.389100  903836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:26:39.389109  903836 out.go:374] Setting ErrFile to fd 2...
	I1217 08:26:39.389113  903836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:26:39.389282  903836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
	I1217 08:26:39.389700  903836 out.go:368] Setting JSON to false
	I1217 08:26:39.390571  903836 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11345,"bootTime":1765948654,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:26:39.390633  903836 start.go:143] virtualization: kvm guest
	I1217 08:26:39.392213  903836 out.go:179] * [functional-122342] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 08:26:39.393535  903836 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 08:26:39.393544  903836 notify.go:221] Checking for updates...
	I1217 08:26:39.395416  903836 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:26:39.396707  903836 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-893359/kubeconfig
	I1217 08:26:39.397697  903836 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-893359/.minikube
	I1217 08:26:39.398675  903836 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 08:26:39.399610  903836 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 08:26:39.401104  903836 config.go:182] Loaded profile config "functional-122342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:26:39.401836  903836 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:26:39.431419  903836 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 08:26:39.432431  903836 start.go:309] selected driver: kvm2
	I1217 08:26:39.432446  903836 start.go:927] validating driver "kvm2" against &{Name:functional-122342 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.3 ClusterName:functional-122342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:26:39.432577  903836 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 08:26:39.433465  903836 cni.go:84] Creating CNI manager for ""
	I1217 08:26:39.433569  903836 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 08:26:39.433640  903836 start.go:353] cluster config:
	{Name:functional-122342 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-122342 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:26:39.434839  903836 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 17 08:35:30 functional-122342 crio[5246]: time="2025-12-17 08:35:30.359543984Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=af4e7745-992f-4bd8-8b11-58d671e4508d name=/runtime.v1.RuntimeService/Version
	Dec 17 08:35:30 functional-122342 crio[5246]: time="2025-12-17 08:35:30.360925659Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ba31ec37-5033-4ae8-906f-973d8a8d7213 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:35:30 functional-122342 crio[5246]: time="2025-12-17 08:35:30.362069629Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765960530362045232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:243805,},InodesUsed:&UInt64Value{Value:113,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ba31ec37-5033-4ae8-906f-973d8a8d7213 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:35:30 functional-122342 crio[5246]: time="2025-12-17 08:35:30.363000317Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f3225224-09c4-4040-851e-2d231178ea04 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:35:30 functional-122342 crio[5246]: time="2025-12-17 08:35:30.363054893Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f3225224-09c4-4040-851e-2d231178ea04 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:35:30 functional-122342 crio[5246]: time="2025-12-17 08:35:30.364671181Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b60691b33127eb1d69e1d578a2df5194074c8afb6040d0e859e225b2a7faf00b,PodSandboxId:68d561cd832ffedc54088464d71a19192613de8752150781690c77e00d0d3ac3,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765960134445008730,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-g9l2q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2acc9aa8-e16c-4de2-8104-2731803a9cc0,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deb72c7423d781bbd6336ed4a40839f09a28f7732d98aa1988a91d149b13d536,PodSandboxId:8162d268f90553392b21f8e4efa0af39292098ae35c0c24e47eae372f6246e9b,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765960002487351390,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db4392ab-d4a8-4176-a0d5-5e79bc4d77ed,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78c0b6faa76255f96dd62af5fd66262225c7dd99481de1050acd67d212d3a5cb,PodSandboxId:132c90e54869af00ad8a1cc4d25020ae19d0c9956e1c1d259f3759dac16cf034,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765959992443509180,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ab11f47f-3405-419b-90f6-1e11c8e5cd9e,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3424101cf26fda518503e05bf7b2d5e7236ea52cdc904f8a164497e3ec7063,PodSandboxId:71e2c74872b5a33003691fff2ed5442545cca707510771d0e0b74eb1516adf7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765959908735934561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zpmv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08fa188e-90ff-4dfe-86df-b40eef36765d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.po
rts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d925826c8f6f8fbf3c96ac2c39bef90c2c64073d2bb8523c2026f14a05cad7e,PodSandboxId:8a40b453568e36ababc4eeadcb175c32b9c97488d4dde5746b6559f9bb078156,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_R
UNNING,CreatedAt:1765959908522452636,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-954rb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daf12b32-6915-43d9-b1b0-c897d53bca11,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dbb693f0f59248e6ead291b0e92aba3429534efec95a15bc4cda79405de7355,PodSandboxId:3a827042a4940a14c587f4cec329b8ba40f0107254e0b0ed9748d6b967c4d911,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765959
908453690378,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2d63c8-592a-4d1c-a3e6-cfcc8c3d6ee9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a00bd056609476d6ee4f2b20d1d2344e721e30762f27a0687744b8b3486f3149,PodSandboxId:3e1ed5f767039e00e4df38de35f9176c60ec1c8af91b99d507c5d302d1931135,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765959903804915433,Labels:map[str
ing]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba342c5421c70f9a936a60f9dc9b0678,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:238d16a286b7c1e3f843ca9f49efa5e9c395f1d4f26eb3f2285acd0dbeacc6d0,PodSandboxId:82fb78bdc40026c80bf5943e939619fabe51c64ec5f5cacb57dc8ccda1c6bd15,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736
d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765959903702953944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300532227cce2617d7152b0b0ce7d38d,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8690c53de65dfa345567ecd5227894357e1b82b998968d2fd4d632439c1e17e3,PodSandboxId:e38614e0cd7b821d98279f976cd02f486d895c06fb834319ae2378e85efee607,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotat
ions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765959903731882170,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2d39ca3e768afa6c2876cc33ec430d,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ec936640a50471bba654dfa6356efdd70d3a5729b21c3a8f12de69136db7cf,PodSandboxId:ef656908d2b6599d327cb98383f24340bb090f05d05c11ef96e5e7634482c200,Metadata:&ContainerMetadata
{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765959903722314004,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ce8ca7de0471d348f97a4fbf14f4cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:733f311df4b6f138b4447648303dc3ccf991f4465c1a805ce4639457f18e
318c,PodSandboxId:546f8fb776a5c3846f1370782db9e94fd5fcd478121542ff2d33b26798b43225,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765959864347695593,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zpmv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08fa188e-90ff-4dfe-86df-b40eef36765d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness
-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26dd379d79c851d78e49afda695d69aa1a1496132def151ccc64aca74a55ba04,PodSandboxId:d57f6fa06bf8adabfb271001e8803287eb8ffadb89098032e8d2a68c25ee6f1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1765959863911689903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-954rb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daf12b32-6915-43d9-b1b0-c897d53bca11,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dba1678e8c75f270f557406de1ea4d54c058405d51078888b5a24b251f0680b2,PodSandboxId:37756fdf1a3717047c0fd184ce9a77415ea73d8ba538e154786cde5d6f4806c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765959863932722534,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2d63c8-592a-4d1c-a3e6-cfcc8c3d6ee9,},Annotations:map[string]string{io.kubernetes.container.hash: 6
c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5b186eccdc4e422c6f76bbba5bc0ef1eb32b926601973fa064f3bcadfc7b7c,PodSandboxId:34d0183982ab8397230f2901ed1e419778a678d9a64e966f6f60c06d4840cb51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1765959860173761568,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ce8ca7de0471d348f97a4fbf14f4cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kub
ernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91e15e9406ddd182134e86ad3b00b12cb5607dc046505f7a127894a576e9419,PodSandboxId:65d7100394df905a2f9450b95c42db3ef1ee66205b82099018b319f4a048e21d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1765959860175676559,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-122342,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 3e2d39ca3e768afa6c2876cc33ec430d,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77651de2ba10aee37eab124a6b2a972669382d5aa8f7ca2419c76bc2a15bff59,PodSandboxId:ff0e203b9a657d1d2b8e9034a0d8f8948a408b2d463c68ba29cacc577e6b0184,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765959860074675831,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kub
ernetes.pod.name: etcd-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba342c5421c70f9a936a60f9dc9b0678,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f3225224-09c4-4040-851e-2d231178ea04 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:35:30 functional-122342 crio[5246]: time="2025-12-17 08:35:30.401412049Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6ea8459f-97bd-4711-bbd7-9fd0c64e5b8b name=/runtime.v1.RuntimeService/Version
	Dec 17 08:35:30 functional-122342 crio[5246]: time="2025-12-17 08:35:30.401551568Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6ea8459f-97bd-4711-bbd7-9fd0c64e5b8b name=/runtime.v1.RuntimeService/Version
	Dec 17 08:35:30 functional-122342 crio[5246]: time="2025-12-17 08:35:30.402658094Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8c433dc0-0323-41d2-829e-7020e1806e5a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:35:30 functional-122342 crio[5246]: time="2025-12-17 08:35:30.403384351Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765960530403362156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:243805,},InodesUsed:&UInt64Value{Value:113,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8c433dc0-0323-41d2-829e-7020e1806e5a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:35:30 functional-122342 crio[5246]: time="2025-12-17 08:35:30.404204453Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=70b5da15-8a52-4edc-ab57-9582370dfdc7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:35:30 functional-122342 crio[5246]: time="2025-12-17 08:35:30.404316140Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=70b5da15-8a52-4edc-ab57-9582370dfdc7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:35:30 functional-122342 crio[5246]: time="2025-12-17 08:35:30.404694113Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b60691b33127eb1d69e1d578a2df5194074c8afb6040d0e859e225b2a7faf00b,PodSandboxId:68d561cd832ffedc54088464d71a19192613de8752150781690c77e00d0d3ac3,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765960134445008730,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-g9l2q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2acc9aa8-e16c-4de2-8104-2731803a9cc0,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deb72c7423d781bbd6336ed4a40839f09a28f7732d98aa1988a91d149b13d536,PodSandboxId:8162d268f90553392b21f8e4efa0af39292098ae35c0c24e47eae372f6246e9b,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765960002487351390,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db4392ab-d4a8-4176-a0d5-5e79bc4d77ed,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78c0b6faa76255f96dd62af5fd66262225c7dd99481de1050acd67d212d3a5cb,PodSandboxId:132c90e54869af00ad8a1cc4d25020ae19d0c9956e1c1d259f3759dac16cf034,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765959992443509180,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ab11f47f-3405-419b-90f6-1e11c8e5cd9e,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3424101cf26fda518503e05bf7b2d5e7236ea52cdc904f8a164497e3ec7063,PodSandboxId:71e2c74872b5a33003691fff2ed5442545cca707510771d0e0b74eb1516adf7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765959908735934561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zpmv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08fa188e-90ff-4dfe-86df-b40eef36765d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.po
rts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d925826c8f6f8fbf3c96ac2c39bef90c2c64073d2bb8523c2026f14a05cad7e,PodSandboxId:8a40b453568e36ababc4eeadcb175c32b9c97488d4dde5746b6559f9bb078156,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_R
UNNING,CreatedAt:1765959908522452636,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-954rb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daf12b32-6915-43d9-b1b0-c897d53bca11,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dbb693f0f59248e6ead291b0e92aba3429534efec95a15bc4cda79405de7355,PodSandboxId:3a827042a4940a14c587f4cec329b8ba40f0107254e0b0ed9748d6b967c4d911,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765959
908453690378,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2d63c8-592a-4d1c-a3e6-cfcc8c3d6ee9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a00bd056609476d6ee4f2b20d1d2344e721e30762f27a0687744b8b3486f3149,PodSandboxId:3e1ed5f767039e00e4df38de35f9176c60ec1c8af91b99d507c5d302d1931135,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765959903804915433,Labels:map[str
ing]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba342c5421c70f9a936a60f9dc9b0678,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:238d16a286b7c1e3f843ca9f49efa5e9c395f1d4f26eb3f2285acd0dbeacc6d0,PodSandboxId:82fb78bdc40026c80bf5943e939619fabe51c64ec5f5cacb57dc8ccda1c6bd15,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736
d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765959903702953944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300532227cce2617d7152b0b0ce7d38d,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8690c53de65dfa345567ecd5227894357e1b82b998968d2fd4d632439c1e17e3,PodSandboxId:e38614e0cd7b821d98279f976cd02f486d895c06fb834319ae2378e85efee607,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotat
ions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765959903731882170,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2d39ca3e768afa6c2876cc33ec430d,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ec936640a50471bba654dfa6356efdd70d3a5729b21c3a8f12de69136db7cf,PodSandboxId:ef656908d2b6599d327cb98383f24340bb090f05d05c11ef96e5e7634482c200,Metadata:&ContainerMetadata
{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765959903722314004,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ce8ca7de0471d348f97a4fbf14f4cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:733f311df4b6f138b4447648303dc3ccf991f4465c1a805ce4639457f18e
318c,PodSandboxId:546f8fb776a5c3846f1370782db9e94fd5fcd478121542ff2d33b26798b43225,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765959864347695593,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zpmv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08fa188e-90ff-4dfe-86df-b40eef36765d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness
-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26dd379d79c851d78e49afda695d69aa1a1496132def151ccc64aca74a55ba04,PodSandboxId:d57f6fa06bf8adabfb271001e8803287eb8ffadb89098032e8d2a68c25ee6f1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1765959863911689903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-954rb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daf12b32-6915-43d9-b1b0-c897d53bca11,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dba1678e8c75f270f557406de1ea4d54c058405d51078888b5a24b251f0680b2,PodSandboxId:37756fdf1a3717047c0fd184ce9a77415ea73d8ba538e154786cde5d6f4806c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765959863932722534,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2d63c8-592a-4d1c-a3e6-cfcc8c3d6ee9,},Annotations:map[string]string{io.kubernetes.container.hash: 6
c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5b186eccdc4e422c6f76bbba5bc0ef1eb32b926601973fa064f3bcadfc7b7c,PodSandboxId:34d0183982ab8397230f2901ed1e419778a678d9a64e966f6f60c06d4840cb51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1765959860173761568,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ce8ca7de0471d348f97a4fbf14f4cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kub
ernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91e15e9406ddd182134e86ad3b00b12cb5607dc046505f7a127894a576e9419,PodSandboxId:65d7100394df905a2f9450b95c42db3ef1ee66205b82099018b319f4a048e21d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1765959860175676559,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-122342,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 3e2d39ca3e768afa6c2876cc33ec430d,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77651de2ba10aee37eab124a6b2a972669382d5aa8f7ca2419c76bc2a15bff59,PodSandboxId:ff0e203b9a657d1d2b8e9034a0d8f8948a408b2d463c68ba29cacc577e6b0184,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765959860074675831,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kub
ernetes.pod.name: etcd-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba342c5421c70f9a936a60f9dc9b0678,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=70b5da15-8a52-4edc-ab57-9582370dfdc7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:35:30 functional-122342 crio[5246]: time="2025-12-17 08:35:30.433930664Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b8c3c859-00d4-447a-a9d3-f76555331aab name=/runtime.v1.RuntimeService/Version
	Dec 17 08:35:30 functional-122342 crio[5246]: time="2025-12-17 08:35:30.434017702Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b8c3c859-00d4-447a-a9d3-f76555331aab name=/runtime.v1.RuntimeService/Version
	Dec 17 08:35:30 functional-122342 crio[5246]: time="2025-12-17 08:35:30.436651659Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7a6607b7-d07c-46e1-9beb-18151d5e8320 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:35:30 functional-122342 crio[5246]: time="2025-12-17 08:35:30.437607878Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765960530437579437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:243805,},InodesUsed:&UInt64Value{Value:113,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a6607b7-d07c-46e1-9beb-18151d5e8320 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:35:30 functional-122342 crio[5246]: time="2025-12-17 08:35:30.438547179Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1f65140b-4bc4-4196-b928-10205ad650b0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:35:30 functional-122342 crio[5246]: time="2025-12-17 08:35:30.438623754Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1f65140b-4bc4-4196-b928-10205ad650b0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:35:30 functional-122342 crio[5246]: time="2025-12-17 08:35:30.438930270Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b60691b33127eb1d69e1d578a2df5194074c8afb6040d0e859e225b2a7faf00b,PodSandboxId:68d561cd832ffedc54088464d71a19192613de8752150781690c77e00d0d3ac3,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765960134445008730,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-g9l2q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2acc9aa8-e16c-4de2-8104-2731803a9cc0,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deb72c7423d781bbd6336ed4a40839f09a28f7732d98aa1988a91d149b13d536,PodSandboxId:8162d268f90553392b21f8e4efa0af39292098ae35c0c24e47eae372f6246e9b,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765960002487351390,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db4392ab-d4a8-4176-a0d5-5e79bc4d77ed,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78c0b6faa76255f96dd62af5fd66262225c7dd99481de1050acd67d212d3a5cb,PodSandboxId:132c90e54869af00ad8a1cc4d25020ae19d0c9956e1c1d259f3759dac16cf034,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765959992443509180,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ab11f47f-3405-419b-90f6-1e11c8e5cd9e,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3424101cf26fda518503e05bf7b2d5e7236ea52cdc904f8a164497e3ec7063,PodSandboxId:71e2c74872b5a33003691fff2ed5442545cca707510771d0e0b74eb1516adf7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765959908735934561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zpmv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08fa188e-90ff-4dfe-86df-b40eef36765d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.po
rts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d925826c8f6f8fbf3c96ac2c39bef90c2c64073d2bb8523c2026f14a05cad7e,PodSandboxId:8a40b453568e36ababc4eeadcb175c32b9c97488d4dde5746b6559f9bb078156,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_R
UNNING,CreatedAt:1765959908522452636,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-954rb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daf12b32-6915-43d9-b1b0-c897d53bca11,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dbb693f0f59248e6ead291b0e92aba3429534efec95a15bc4cda79405de7355,PodSandboxId:3a827042a4940a14c587f4cec329b8ba40f0107254e0b0ed9748d6b967c4d911,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765959
908453690378,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2d63c8-592a-4d1c-a3e6-cfcc8c3d6ee9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a00bd056609476d6ee4f2b20d1d2344e721e30762f27a0687744b8b3486f3149,PodSandboxId:3e1ed5f767039e00e4df38de35f9176c60ec1c8af91b99d507c5d302d1931135,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765959903804915433,Labels:map[str
ing]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba342c5421c70f9a936a60f9dc9b0678,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:238d16a286b7c1e3f843ca9f49efa5e9c395f1d4f26eb3f2285acd0dbeacc6d0,PodSandboxId:82fb78bdc40026c80bf5943e939619fabe51c64ec5f5cacb57dc8ccda1c6bd15,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736
d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765959903702953944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300532227cce2617d7152b0b0ce7d38d,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8690c53de65dfa345567ecd5227894357e1b82b998968d2fd4d632439c1e17e3,PodSandboxId:e38614e0cd7b821d98279f976cd02f486d895c06fb834319ae2378e85efee607,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotat
ions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765959903731882170,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2d39ca3e768afa6c2876cc33ec430d,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ec936640a50471bba654dfa6356efdd70d3a5729b21c3a8f12de69136db7cf,PodSandboxId:ef656908d2b6599d327cb98383f24340bb090f05d05c11ef96e5e7634482c200,Metadata:&ContainerMetadata
{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765959903722314004,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ce8ca7de0471d348f97a4fbf14f4cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:733f311df4b6f138b4447648303dc3ccf991f4465c1a805ce4639457f18e
318c,PodSandboxId:546f8fb776a5c3846f1370782db9e94fd5fcd478121542ff2d33b26798b43225,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765959864347695593,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zpmv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08fa188e-90ff-4dfe-86df-b40eef36765d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness
-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26dd379d79c851d78e49afda695d69aa1a1496132def151ccc64aca74a55ba04,PodSandboxId:d57f6fa06bf8adabfb271001e8803287eb8ffadb89098032e8d2a68c25ee6f1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1765959863911689903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-954rb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daf12b32-6915-43d9-b1b0-c897d53bca11,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dba1678e8c75f270f557406de1ea4d54c058405d51078888b5a24b251f0680b2,PodSandboxId:37756fdf1a3717047c0fd184ce9a77415ea73d8ba538e154786cde5d6f4806c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765959863932722534,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2d63c8-592a-4d1c-a3e6-cfcc8c3d6ee9,},Annotations:map[string]string{io.kubernetes.container.hash: 6
c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5b186eccdc4e422c6f76bbba5bc0ef1eb32b926601973fa064f3bcadfc7b7c,PodSandboxId:34d0183982ab8397230f2901ed1e419778a678d9a64e966f6f60c06d4840cb51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1765959860173761568,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ce8ca7de0471d348f97a4fbf14f4cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kub
ernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91e15e9406ddd182134e86ad3b00b12cb5607dc046505f7a127894a576e9419,PodSandboxId:65d7100394df905a2f9450b95c42db3ef1ee66205b82099018b319f4a048e21d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1765959860175676559,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-122342,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 3e2d39ca3e768afa6c2876cc33ec430d,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77651de2ba10aee37eab124a6b2a972669382d5aa8f7ca2419c76bc2a15bff59,PodSandboxId:ff0e203b9a657d1d2b8e9034a0d8f8948a408b2d463c68ba29cacc577e6b0184,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765959860074675831,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kub
ernetes.pod.name: etcd-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba342c5421c70f9a936a60f9dc9b0678,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1f65140b-4bc4-4196-b928-10205ad650b0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:35:30 functional-122342 crio[5246]: time="2025-12-17 08:35:30.464970312Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=e9092034-db6e-4745-8238-2b365c112d90 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 17 08:35:30 functional-122342 crio[5246]: time="2025-12-17 08:35:30.465415900Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:68d561cd832ffedc54088464d71a19192613de8752150781690c77e00d0d3ac3,Metadata:&PodSandboxMetadata{Name:mysql-6bcdcbc558-g9l2q,Uid:2acc9aa8-e16c-4de2-8104-2731803a9cc0,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765960016337633017,Labels:map[string]string{app: mysql,io.kubernetes.container.name: POD,io.kubernetes.pod.name: mysql-6bcdcbc558-g9l2q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2acc9aa8-e16c-4de2-8104-2731803a9cc0,pod-template-hash: 6bcdcbc558,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T08:26:56.016323211Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8162d268f90553392b21f8e4efa0af39292098ae35c0c24e47eae372f6246e9b,Metadata:&PodSandboxMetadata{Name:sp-pod,Uid:db4392ab-d4a8-4176-a0d5-5e79bc4d77ed,Namespace:default,Attempt:0,},State:SANDBOX_READY,Cre
atedAt:1765960002273998794,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db4392ab-d4a8-4176-a0d5-5e79bc4d77ed,test: storage-provisioner,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"test\":\"storage-provisioner\"},\"name\":\"sp-pod\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"image\":\"public.ecr.aws/nginx/nginx:alpine\",\"name\":\"myfrontend\",\"volumeMounts\":[{\"mountPath\":\"/tmp/mount\",\"name\":\"mypd\"}]}],\"volumes\":[{\"name\":\"mypd\",\"persistentVolumeClaim\":{\"claimName\":\"myclaim\"}}]}}\n,kubernetes.io/config.seen: 2025-12-17T08:26:41.956631838Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4f35053ae38159e5eeef19f123a90edfef9bfd71c9f241db7d293abe348d4e2a,Metadata:&PodSandboxMetadata{Name:dashboard-metrics-scraper-77bf4d6c4c-xb94z,Uid:409
0411a-f6a9-4368-b45c-9ebe9d0f52a6,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765960000881368285,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-77bf4d6c4c-xb94z,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4090411a-f6a9-4368-b45c-9ebe9d0f52a6,k8s-app: dashboard-metrics-scraper,pod-template-hash: 77bf4d6c4c,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T08:26:40.560348946Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:bd93e3a51b325a0f0ae9090ec5e4263303e8a4e2c9cb99bd8056b90d1bcdfbb2,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-855c9754f9-wzrhk,Uid:a3799b22-1e17-4740-b6ba-7efe7afa6d21,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765960000784852235,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name:
kubernetes-dashboard-855c9754f9-wzrhk,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: a3799b22-1e17-4740-b6ba-7efe7afa6d21,k8s-app: kubernetes-dashboard,pod-template-hash: 855c9754f9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T08:26:40.467998653Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:132c90e54869af00ad8a1cc4d25020ae19d0c9956e1c1d259f3759dac16cf034,Metadata:&PodSandboxMetadata{Name:busybox-mount,Uid:ab11f47f-3405-419b-90f6-1e11c8e5cd9e,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1765959931911437970,Labels:map[string]string{integration-test: busybox-mount,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ab11f47f-3405-419b-90f6-1e11c8e5cd9e,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T08:25:31.594719626Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8b0d03e658e28883ec052dfa2659508e3a8a
379eb1aa0f279d5b469254cec975,Metadata:&PodSandboxMetadata{Name:hello-node-75c85bcc94-76g5z,Uid:61c3ac51-6a79-43ce-b0ec-bd6c030bf75b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765959929442651510,Labels:map[string]string{app: hello-node,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-75c85bcc94-76g5z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61c3ac51-6a79-43ce-b0ec-bd6c030bf75b,pod-template-hash: 75c85bcc94,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T08:25:29.115048595Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f82ff1e0be550f5d8701a0effd96dcd60ee55f8b86579c53f1450f8f65a32b1a,Metadata:&PodSandboxMetadata{Name:hello-node-connect-7d85dfc575-b8nbt,Uid:a03a3faa-9feb-43f7-86f2-a398df95eddd,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765959929324871873,Labels:map[string]string{app: hello-node-connect,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-b8nbt,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a03a3faa-9feb-43f7-86f2-a398df95eddd,pod-template-hash: 7d85dfc575,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T08:25:29.004092622Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:71e2c74872b5a33003691fff2ed5442545cca707510771d0e0b74eb1516adf7a,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-zpmv6,Uid:08fa188e-90ff-4dfe-86df-b40eef36765d,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1765959908242757769,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-zpmv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08fa188e-90ff-4dfe-86df-b40eef36765d,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T08:25:07.775869944Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8a40b453568e36ababc4eeadcb175c32b9c97488d4dde5746b6559f9bb078156,Metadata:&P
odSandboxMetadata{Name:kube-proxy-954rb,Uid:daf12b32-6915-43d9-b1b0-c897d53bca11,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1765959908114359128,Labels:map[string]string{controller-revision-hash: 55c7cb7b75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-954rb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daf12b32-6915-43d9-b1b0-c897d53bca11,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T08:25:07.775878341Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3a827042a4940a14c587f4cec329b8ba40f0107254e0b0ed9748d6b967c4d911,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:2b2d63c8-592a-4d1c-a3e6-cfcc8c3d6ee9,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1765959908106994943,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-
provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2d63c8-592a-4d1c-a3e6-cfcc8c3d6ee9,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-17T08:25:07.775880276Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3e1ed5f767039e00e4df38de35f9176c60ec1c8af
91b99d507c5d302d1931135,Metadata:&PodSandboxMetadata{Name:etcd-functional-122342,Uid:ba342c5421c70f9a936a60f9dc9b0678,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1765959903493364250,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba342c5421c70f9a936a60f9dc9b0678,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.97:2379,kubernetes.io/config.hash: ba342c5421c70f9a936a60f9dc9b0678,kubernetes.io/config.seen: 2025-12-17T08:25:02.779617050Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e38614e0cd7b821d98279f976cd02f486d895c06fb834319ae2378e85efee607,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-122342,Uid:3e2d39ca3e768afa6c2876cc33ec430d,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1765959903479019328,Labels:map[string]string{
component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2d39ca3e768afa6c2876cc33ec430d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3e2d39ca3e768afa6c2876cc33ec430d,kubernetes.io/config.seen: 2025-12-17T08:25:02.779622401Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ef656908d2b6599d327cb98383f24340bb090f05d05c11ef96e5e7634482c200,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-122342,Uid:8ce8ca7de0471d348f97a4fbf14f4cf4,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1765959903460708404,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ce8ca7de0471d348f97a4fbf14f4cf4,tier: control-plane,},Annotations:map[string]string{kubernete
s.io/config.hash: 8ce8ca7de0471d348f97a4fbf14f4cf4,kubernetes.io/config.seen: 2025-12-17T08:25:02.779623329Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:82fb78bdc40026c80bf5943e939619fabe51c64ec5f5cacb57dc8ccda1c6bd15,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-122342,Uid:300532227cce2617d7152b0b0ce7d38d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765959903458821084,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300532227cce2617d7152b0b0ce7d38d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.97:8441,kubernetes.io/config.hash: 300532227cce2617d7152b0b0ce7d38d,kubernetes.io/config.seen: 2025-12-17T08:25:02.779621074Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:546f8fb776a5c3846f1370782db9e94
fd5fcd478121542ff2d33b26798b43225,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-zpmv6,Uid:08fa188e-90ff-4dfe-86df-b40eef36765d,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1765959863856130731,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-zpmv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08fa188e-90ff-4dfe-86df-b40eef36765d,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T08:24:23.396873067Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d57f6fa06bf8adabfb271001e8803287eb8ffadb89098032e8d2a68c25ee6f1a,Metadata:&PodSandboxMetadata{Name:kube-proxy-954rb,Uid:daf12b32-6915-43d9-b1b0-c897d53bca11,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1765959863738731040,Labels:map[string]string{controller-revision-hash: 55c7cb7b75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-954rb,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daf12b32-6915-43d9-b1b0-c897d53bca11,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T08:24:23.396878161Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:37756fdf1a3717047c0fd184ce9a77415ea73d8ba538e154786cde5d6f4806c8,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:2b2d63c8-592a-4d1c-a3e6-cfcc8c3d6ee9,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1765959863728146224,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2d63c8-592a-4d1c-a3e6-cfcc8c3d6ee9,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.
io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-17T08:24:23.396865590Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:34d0183982ab8397230f2901ed1e419778a678d9a64e966f6f60c06d4840cb51,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-122342,Uid:8ce8ca7de0471d348f97a4fbf14f4cf4,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1765959859925595490,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kub
ernetes.pod.name: kube-scheduler-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ce8ca7de0471d348f97a4fbf14f4cf4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8ce8ca7de0471d348f97a4fbf14f4cf4,kubernetes.io/config.seen: 2025-12-17T08:24:19.399103933Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:65d7100394df905a2f9450b95c42db3ef1ee66205b82099018b319f4a048e21d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-122342,Uid:3e2d39ca3e768afa6c2876cc33ec430d,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1765959859913048821,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2d39ca3e768afa6c2876cc33ec430d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3e2d39ca3e768afa6c2876cc33ec430d,kub
ernetes.io/config.seen: 2025-12-17T08:24:19.399103113Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ff0e203b9a657d1d2b8e9034a0d8f8948a408b2d463c68ba29cacc577e6b0184,Metadata:&PodSandboxMetadata{Name:etcd-functional-122342,Uid:ba342c5421c70f9a936a60f9dc9b0678,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1765959859894616513,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba342c5421c70f9a936a60f9dc9b0678,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.97:2379,kubernetes.io/config.hash: ba342c5421c70f9a936a60f9dc9b0678,kubernetes.io/config.seen: 2025-12-17T08:24:19.399098889Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=e9092034-db6e-4745-8238-2b365c112d90 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 17 08:35:30 functional-122342 crio[5246]: time="2025-12-17 08:35:30.466651989Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=31904dd6-7408-4404-a1c2-156753f38cec name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:35:30 functional-122342 crio[5246]: time="2025-12-17 08:35:30.466704479Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=31904dd6-7408-4404-a1c2-156753f38cec name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:35:30 functional-122342 crio[5246]: time="2025-12-17 08:35:30.467131974Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b60691b33127eb1d69e1d578a2df5194074c8afb6040d0e859e225b2a7faf00b,PodSandboxId:68d561cd832ffedc54088464d71a19192613de8752150781690c77e00d0d3ac3,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765960134445008730,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-6bcdcbc558-g9l2q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2acc9aa8-e16c-4de2-8104-2731803a9cc0,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deb72c7423d781bbd6336ed4a40839f09a28f7732d98aa1988a91d149b13d536,PodSandboxId:8162d268f90553392b21f8e4efa0af39292098ae35c0c24e47eae372f6246e9b,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765960002487351390,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db4392ab-d4a8-4176-a0d5-5e79bc4d77ed,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78c0b6faa76255f96dd62af5fd66262225c7dd99481de1050acd67d212d3a5cb,PodSandboxId:132c90e54869af00ad8a1cc4d25020ae19d0c9956e1c1d259f3759dac16cf034,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765959992443509180,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ab11f47f-3405-419b-90f6-1e11c8e5cd9e,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3424101cf26fda518503e05bf7b2d5e7236ea52cdc904f8a164497e3ec7063,PodSandboxId:71e2c74872b5a33003691fff2ed5442545cca707510771d0e0b74eb1516adf7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765959908735934561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zpmv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08fa188e-90ff-4dfe-86df-b40eef36765d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.po
rts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d925826c8f6f8fbf3c96ac2c39bef90c2c64073d2bb8523c2026f14a05cad7e,PodSandboxId:8a40b453568e36ababc4eeadcb175c32b9c97488d4dde5746b6559f9bb078156,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_R
UNNING,CreatedAt:1765959908522452636,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-954rb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daf12b32-6915-43d9-b1b0-c897d53bca11,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dbb693f0f59248e6ead291b0e92aba3429534efec95a15bc4cda79405de7355,PodSandboxId:3a827042a4940a14c587f4cec329b8ba40f0107254e0b0ed9748d6b967c4d911,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765959
908453690378,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2d63c8-592a-4d1c-a3e6-cfcc8c3d6ee9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a00bd056609476d6ee4f2b20d1d2344e721e30762f27a0687744b8b3486f3149,PodSandboxId:3e1ed5f767039e00e4df38de35f9176c60ec1c8af91b99d507c5d302d1931135,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765959903804915433,Labels:map[str
ing]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba342c5421c70f9a936a60f9dc9b0678,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:238d16a286b7c1e3f843ca9f49efa5e9c395f1d4f26eb3f2285acd0dbeacc6d0,PodSandboxId:82fb78bdc40026c80bf5943e939619fabe51c64ec5f5cacb57dc8ccda1c6bd15,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736
d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765959903702953944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300532227cce2617d7152b0b0ce7d38d,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8690c53de65dfa345567ecd5227894357e1b82b998968d2fd4d632439c1e17e3,PodSandboxId:e38614e0cd7b821d98279f976cd02f486d895c06fb834319ae2378e85efee607,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotat
ions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765959903731882170,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e2d39ca3e768afa6c2876cc33ec430d,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ec936640a50471bba654dfa6356efdd70d3a5729b21c3a8f12de69136db7cf,PodSandboxId:ef656908d2b6599d327cb98383f24340bb090f05d05c11ef96e5e7634482c200,Metadata:&ContainerMetadata
{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765959903722314004,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ce8ca7de0471d348f97a4fbf14f4cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:733f311df4b6f138b4447648303dc3ccf991f4465c1a805ce4639457f18e
318c,PodSandboxId:546f8fb776a5c3846f1370782db9e94fd5fcd478121542ff2d33b26798b43225,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765959864347695593,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zpmv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08fa188e-90ff-4dfe-86df-b40eef36765d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness
-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26dd379d79c851d78e49afda695d69aa1a1496132def151ccc64aca74a55ba04,PodSandboxId:d57f6fa06bf8adabfb271001e8803287eb8ffadb89098032e8d2a68c25ee6f1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_EXITED,CreatedAt:1765959863911689903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-954rb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daf12b32-6915-43d9-b1b0-c897d53bca11,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dba1678e8c75f270f557406de1ea4d54c058405d51078888b5a24b251f0680b2,PodSandboxId:37756fdf1a3717047c0fd184ce9a77415ea73d8ba538e154786cde5d6f4806c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765959863932722534,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2d63c8-592a-4d1c-a3e6-cfcc8c3d6ee9,},Annotations:map[string]string{io.kubernetes.container.hash: 6
c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5b186eccdc4e422c6f76bbba5bc0ef1eb32b926601973fa064f3bcadfc7b7c,PodSandboxId:34d0183982ab8397230f2901ed1e419778a678d9a64e966f6f60c06d4840cb51,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_EXITED,CreatedAt:1765959860173761568,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ce8ca7de0471d348f97a4fbf14f4cf4,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kub
ernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91e15e9406ddd182134e86ad3b00b12cb5607dc046505f7a127894a576e9419,PodSandboxId:65d7100394df905a2f9450b95c42db3ef1ee66205b82099018b319f4a048e21d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_EXITED,CreatedAt:1765959860175676559,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-122342,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 3e2d39ca3e768afa6c2876cc33ec430d,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77651de2ba10aee37eab124a6b2a972669382d5aa8f7ca2419c76bc2a15bff59,PodSandboxId:ff0e203b9a657d1d2b8e9034a0d8f8948a408b2d463c68ba29cacc577e6b0184,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765959860074675831,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kub
ernetes.pod.name: etcd-functional-122342,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba342c5421c70f9a936a60f9dc9b0678,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=31904dd6-7408-4404-a1c2-156753f38cec name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                         CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b60691b33127e       public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036   6 minutes ago       Running             mysql                     0                   68d561cd832ff       mysql-6bcdcbc558-g9l2q                      default
	deb72c7423d78       a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c                                              8 minutes ago       Running             myfrontend                0                   8162d268f9055       sp-pod                                      default
	78c0b6faa7625       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e           8 minutes ago       Exited              mount-munger              0                   132c90e54869a       busybox-mount                               default
	4c3424101cf26       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                              10 minutes ago      Running             coredns                   2                   71e2c74872b5a       coredns-66bc5c9577-zpmv6                    kube-system
	3d925826c8f6f       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                              10 minutes ago      Running             kube-proxy                2                   8a40b453568e3       kube-proxy-954rb                            kube-system
	3dbb693f0f592       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              10 minutes ago      Running             storage-provisioner       2                   3a827042a4940       storage-provisioner                         kube-system
	a00bd05660947       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                              10 minutes ago      Running             etcd                      2                   3e1ed5f767039       etcd-functional-122342                      kube-system
	8690c53de65df       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                              10 minutes ago      Running             kube-controller-manager   2                   e38614e0cd7b8       kube-controller-manager-functional-122342   kube-system
	a1ec936640a50       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                              10 minutes ago      Running             kube-scheduler            2                   ef656908d2b65       kube-scheduler-functional-122342            kube-system
	238d16a286b7c       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                              10 minutes ago      Running             kube-apiserver            0                   82fb78bdc4002       kube-apiserver-functional-122342            kube-system
	733f311df4b6f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                              11 minutes ago      Exited              coredns                   1                   546f8fb776a5c       coredns-66bc5c9577-zpmv6                    kube-system
	dba1678e8c75f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              11 minutes ago      Exited              storage-provisioner       1                   37756fdf1a371       storage-provisioner                         kube-system
	26dd379d79c85       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                              11 minutes ago      Exited              kube-proxy                1                   d57f6fa06bf8a       kube-proxy-954rb                            kube-system
	b91e15e9406dd       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                              11 minutes ago      Exited              kube-controller-manager   1                   65d7100394df9       kube-controller-manager-functional-122342   kube-system
	8e5b186eccdc4       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                              11 minutes ago      Exited              kube-scheduler            1                   34d0183982ab8       kube-scheduler-functional-122342            kube-system
	77651de2ba10a       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                              11 minutes ago      Exited              etcd                      1                   ff0e203b9a657       etcd-functional-122342                      kube-system
	
	
	==> coredns [4c3424101cf26fda518503e05bf7b2d5e7236ea52cdc904f8a164497e3ec7063] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35826 - 7078 "HINFO IN 971251318359296826.1105888965815954832. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.056436496s
	
	
	==> coredns [733f311df4b6f138b4447648303dc3ccf991f4465c1a805ce4639457f18e318c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34815 - 36151 "HINFO IN 8693930430730607389.7737780181467506304. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.110641538s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-122342
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-122342
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144
	                    minikube.k8s.io/name=functional-122342
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T08_23_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 08:23:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-122342
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 08:35:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 08:34:07 +0000   Wed, 17 Dec 2025 08:23:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 08:34:07 +0000   Wed, 17 Dec 2025 08:23:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 08:34:07 +0000   Wed, 17 Dec 2025 08:23:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 08:34:07 +0000   Wed, 17 Dec 2025 08:23:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    functional-122342
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 61d98f57dbb24a4cba49d15398ceb72c
	  System UUID:                61d98f57-dbb2-4a4c-ba49-d15398ceb72c
	  Boot ID:                    5e8a90d3-1ee9-49a4-ade8-286a96f0d59c
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-76g5z                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-b8nbt           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-6bcdcbc558-g9l2q                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    8m35s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m49s
	  kube-system                 coredns-66bc5c9577-zpmv6                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     11m
	  kube-system                 etcd-functional-122342                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         11m
	  kube-system                 kube-apiserver-functional-122342              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-122342     200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-954rb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-122342              100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-xb94z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-wzrhk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-122342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-122342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-122342 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-122342 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-122342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-122342 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                11m                kubelet          Node functional-122342 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-122342 event: Registered Node functional-122342 in Controller
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-122342 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-122342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-122342 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-122342 event: Registered Node functional-122342 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-122342 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-122342 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-122342 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-122342 event: Registered Node functional-122342 in Controller
	
	
	==> dmesg <==
	[  +0.000477] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.162886] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.082563] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.092465] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.130289] kauditd_printk_skb: 171 callbacks suppressed
	[  +6.143001] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.029686] kauditd_printk_skb: 254 callbacks suppressed
	[Dec17 08:24] kauditd_printk_skb: 39 callbacks suppressed
	[  +6.101094] kauditd_printk_skb: 56 callbacks suppressed
	[  +4.563991] kauditd_printk_skb: 176 callbacks suppressed
	[ +14.146643] kauditd_printk_skb: 137 callbacks suppressed
	[  +0.111072] kauditd_printk_skb: 12 callbacks suppressed
	[Dec17 08:25] kauditd_printk_skb: 78 callbacks suppressed
	[  +5.541699] kauditd_printk_skb: 168 callbacks suppressed
	[  +4.464470] kauditd_printk_skb: 133 callbacks suppressed
	[  +0.656489] kauditd_printk_skb: 169 callbacks suppressed
	[  +0.000268] kauditd_printk_skb: 32 callbacks suppressed
	[Dec17 08:26] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.838422] kauditd_printk_skb: 46 callbacks suppressed
	[ +13.673701] kauditd_printk_skb: 145 callbacks suppressed
	[Dec17 08:27] kauditd_printk_skb: 38 callbacks suppressed
	[Dec17 08:29] crun[9808]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +1.884545] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [77651de2ba10aee37eab124a6b2a972669382d5aa8f7ca2419c76bc2a15bff59] <==
	{"level":"warn","ts":"2025-12-17T08:24:22.464227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:22.474641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:22.485571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:22.489697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:22.497007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:22.504624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T08:24:22.581021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39754","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T08:24:46.378683Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-17T08:24:46.378825Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-122342","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.97:2380"],"advertise-client-urls":["https://192.168.39.97:2379"]}
	{"level":"error","ts":"2025-12-17T08:24:46.379563Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T08:24:46.462637Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T08:24:46.462691Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T08:24:46.462708Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f61fae125a956d36","current-leader-member-id":"f61fae125a956d36"}
	{"level":"info","ts":"2025-12-17T08:24:46.462758Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-17T08:24:46.462745Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-17T08:24:46.462804Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T08:24:46.462869Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T08:24:46.462876Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-17T08:24:46.462913Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.97:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T08:24:46.462920Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.97:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T08:24:46.462924Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.97:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T08:24:46.466216Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.97:2380"}
	{"level":"error","ts":"2025-12-17T08:24:46.466294Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.97:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T08:24:46.466316Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2025-12-17T08:24:46.466322Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-122342","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.97:2380"],"advertise-client-urls":["https://192.168.39.97:2379"]}
	
	
	==> etcd [a00bd056609476d6ee4f2b20d1d2344e721e30762f27a0687744b8b3486f3149] <==
	{"level":"warn","ts":"2025-12-17T08:28:51.662434Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T08:28:51.315843Z","time spent":"346.570089ms","remote":"127.0.0.1:35730","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":1,"response size":1140,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 "}
	{"level":"warn","ts":"2025-12-17T08:28:51.657464Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"179.105953ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T08:28:51.662566Z","caller":"traceutil/trace.go:172","msg":"trace[1597772687] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1049; }","duration":"184.207291ms","start":"2025-12-17T08:28:51.478351Z","end":"2025-12-17T08:28:51.662558Z","steps":["trace[1597772687] 'agreement among raft nodes before linearized reading'  (duration: 179.100409ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:28:52.517514Z","caller":"traceutil/trace.go:172","msg":"trace[861961697] transaction","detail":"{read_only:false; response_revision:1051; number_of_response:1; }","duration":"151.008462ms","start":"2025-12-17T08:28:52.366492Z","end":"2025-12-17T08:28:52.517500Z","steps":["trace[861961697] 'process raft request'  (duration: 150.92497ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:28:54.062470Z","caller":"traceutil/trace.go:172","msg":"trace[1832892770] linearizableReadLoop","detail":"{readStateIndex:1167; appliedIndex:1167; }","duration":"238.978268ms","start":"2025-12-17T08:28:53.823479Z","end":"2025-12-17T08:28:54.062457Z","steps":["trace[1832892770] 'read index received'  (duration: 238.973921ms)","trace[1832892770] 'applied index is now lower than readState.Index'  (duration: 3.526µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T08:28:54.062631Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"239.14136ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/hello-node-75c85bcc94-76g5z.1881f3418981d978\" limit:1 ","response":"range_response_count:1 size:775"}
	{"level":"info","ts":"2025-12-17T08:28:54.062654Z","caller":"traceutil/trace.go:172","msg":"trace[1329154762] range","detail":"{range_begin:/registry/events/default/hello-node-75c85bcc94-76g5z.1881f3418981d978; range_end:; response_count:1; response_revision:1051; }","duration":"239.174481ms","start":"2025-12-17T08:28:53.823472Z","end":"2025-12-17T08:28:54.062646Z","steps":["trace[1329154762] 'agreement among raft nodes before linearized reading'  (duration: 239.049636ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:28:54.062668Z","caller":"traceutil/trace.go:172","msg":"trace[1395157181] transaction","detail":"{read_only:false; response_revision:1052; number_of_response:1; }","duration":"354.645285ms","start":"2025-12-17T08:28:53.708012Z","end":"2025-12-17T08:28:54.062658Z","steps":["trace[1395157181] 'process raft request'  (duration: 354.513528ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T08:28:54.062754Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T08:28:53.707996Z","time spent":"354.70909ms","remote":"127.0.0.1:35730","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1050 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-12-17T08:28:54.062947Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"237.097229ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/hello-node-75c85bcc94-76g5z\" limit:1 ","response":"range_response_count:1 size:3273"}
	{"level":"info","ts":"2025-12-17T08:28:54.062995Z","caller":"traceutil/trace.go:172","msg":"trace[784849255] range","detail":"{range_begin:/registry/pods/default/hello-node-75c85bcc94-76g5z; range_end:; response_count:1; response_revision:1052; }","duration":"237.145054ms","start":"2025-12-17T08:28:53.825841Z","end":"2025-12-17T08:28:54.062986Z","steps":["trace[784849255] 'agreement among raft nodes before linearized reading'  (duration: 237.031568ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:28:54.250981Z","caller":"traceutil/trace.go:172","msg":"trace[1309328681] linearizableReadLoop","detail":"{readStateIndex:1168; appliedIndex:1168; }","duration":"182.460576ms","start":"2025-12-17T08:28:54.068468Z","end":"2025-12-17T08:28:54.250928Z","steps":["trace[1309328681] 'read index received'  (duration: 182.454653ms)","trace[1309328681] 'applied index is now lower than readState.Index'  (duration: 5.293µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T08:28:54.257284Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"188.762292ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T08:28:54.257320Z","caller":"traceutil/trace.go:172","msg":"trace[625439943] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1052; }","duration":"188.846255ms","start":"2025-12-17T08:28:54.068465Z","end":"2025-12-17T08:28:54.257311Z","steps":["trace[625439943] 'agreement among raft nodes before linearized reading'  (duration: 182.59633ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:28:54.258040Z","caller":"traceutil/trace.go:172","msg":"trace[1866825117] transaction","detail":"{read_only:false; response_revision:1053; number_of_response:1; }","duration":"190.03831ms","start":"2025-12-17T08:28:54.067988Z","end":"2025-12-17T08:28:54.258027Z","steps":["trace[1866825117] 'process raft request'  (duration: 183.10723ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:28:54.258981Z","caller":"traceutil/trace.go:172","msg":"trace[1580778762] transaction","detail":"{read_only:false; response_revision:1054; number_of_response:1; }","duration":"184.319892ms","start":"2025-12-17T08:28:54.074603Z","end":"2025-12-17T08:28:54.258923Z","steps":["trace[1580778762] 'process raft request'  (duration: 183.036261ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:28:56.251532Z","caller":"traceutil/trace.go:172","msg":"trace[1846296219] transaction","detail":"{read_only:false; response_revision:1064; number_of_response:1; }","duration":"177.396117ms","start":"2025-12-17T08:28:56.074122Z","end":"2025-12-17T08:28:56.251518Z","steps":["trace[1846296219] 'process raft request'  (duration: 177.294056ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:29:00.663369Z","caller":"traceutil/trace.go:172","msg":"trace[1032096049] linearizableReadLoop","detail":"{readStateIndex:1184; appliedIndex:1184; }","duration":"186.148761ms","start":"2025-12-17T08:29:00.477206Z","end":"2025-12-17T08:29:00.663355Z","steps":["trace[1032096049] 'read index received'  (duration: 185.944377ms)","trace[1032096049] 'applied index is now lower than readState.Index'  (duration: 203.462µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T08:29:00.663674Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"186.41735ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T08:29:00.663715Z","caller":"traceutil/trace.go:172","msg":"trace[215806706] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1066; }","duration":"186.508168ms","start":"2025-12-17T08:29:00.477201Z","end":"2025-12-17T08:29:00.663709Z","steps":["trace[215806706] 'agreement among raft nodes before linearized reading'  (duration: 186.398701ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:29:00.664872Z","caller":"traceutil/trace.go:172","msg":"trace[491599279] transaction","detail":"{read_only:false; response_revision:1067; number_of_response:1; }","duration":"388.059975ms","start":"2025-12-17T08:29:00.276802Z","end":"2025-12-17T08:29:00.664862Z","steps":["trace[491599279] 'process raft request'  (duration: 387.242816ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T08:29:00.664955Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T08:29:00.276787Z","time spent":"388.126689ms","remote":"127.0.0.1:35730","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1065 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-12-17T08:35:06.045475Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1135}
	{"level":"info","ts":"2025-12-17T08:35:06.069772Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1135,"took":"23.353507ms","hash":2995589803,"current-db-size-bytes":3538944,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1703936,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-12-17T08:35:06.069813Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2995589803,"revision":1135,"compact-revision":-1}
	
	
	==> kernel <==
	 08:35:30 up 12 min,  0 users,  load average: 0.05, 0.20, 0.17
	Linux functional-122342 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Dec 16 03:41:16 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [238d16a286b7c1e3f843ca9f49efa5e9c395f1d4f26eb3f2285acd0dbeacc6d0] <==
	I1217 08:25:07.386089       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 08:25:07.401170       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 08:25:07.894973       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 08:25:08.174715       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 08:25:09.063315       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 08:25:09.095941       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 08:25:09.123903       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 08:25:09.131123       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 08:25:10.832704       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 08:25:10.982464       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 08:25:11.030473       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 08:25:24.679049       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.243.214"}
	I1217 08:25:29.060295       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.184.126"}
	I1217 08:25:29.172335       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.238.194"}
	I1217 08:26:40.281622       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 08:26:40.592487       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.16.212"}
	I1217 08:26:40.610052       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.16.54"}
	E1217 08:26:41.114362       1 conn.go:339] Error on socket receive: read tcp 192.168.39.97:8441->192.168.39.1:33808: use of closed network connection
	E1217 08:26:48.081135       1 conn.go:339] Error on socket receive: read tcp 192.168.39.97:8441->192.168.39.1:47744: use of closed network connection
	I1217 08:26:55.947742       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.109.119.15"}
	E1217 08:29:01.166218       1 conn.go:339] Error on socket receive: read tcp 192.168.39.97:8441->192.168.39.1:39382: use of closed network connection
	E1217 08:29:02.572157       1 conn.go:339] Error on socket receive: read tcp 192.168.39.97:8441->192.168.39.1:39398: use of closed network connection
	E1217 08:29:04.132135       1 conn.go:339] Error on socket receive: read tcp 192.168.39.97:8441->192.168.39.1:39420: use of closed network connection
	E1217 08:29:07.155921       1 conn.go:339] Error on socket receive: read tcp 192.168.39.97:8441->192.168.39.1:39434: use of closed network connection
	I1217 08:35:07.306891       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [8690c53de65dfa345567ecd5227894357e1b82b998968d2fd4d632439c1e17e3] <==
	I1217 08:25:10.637349       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1217 08:25:10.637973       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1217 08:25:10.639729       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1217 08:25:10.647707       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 08:25:10.647775       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 08:25:10.647786       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 08:25:10.647791       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 08:25:10.655101       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 08:25:10.655161       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1217 08:25:10.655574       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 08:25:10.659343       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1217 08:25:10.669943       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 08:25:10.678670       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1217 08:25:10.680003       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 08:25:10.683337       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1217 08:25:10.683405       1 shared_informer.go:356] "Caches are synced" controller="service account"
	E1217 08:26:40.373827       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 08:26:40.384790       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 08:26:40.397861       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 08:26:40.402313       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 08:26:40.406452       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 08:26:40.406561       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 08:26:40.421002       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 08:26:40.430776       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 08:26:40.430927       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [b91e15e9406ddd182134e86ad3b00b12cb5607dc046505f7a127894a576e9419] <==
	I1217 08:24:26.572294       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1217 08:24:26.572430       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1217 08:24:26.573601       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 08:24:26.573682       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1217 08:24:26.573728       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 08:24:26.574700       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 08:24:26.574745       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 08:24:26.574899       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 08:24:26.574984       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 08:24:26.575056       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 08:24:26.576510       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1217 08:24:26.577734       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 08:24:26.597033       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1217 08:24:26.597079       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1217 08:24:26.597097       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1217 08:24:26.597102       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1217 08:24:26.597106       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1217 08:24:26.600496       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 08:24:26.604810       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 08:24:26.609869       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 08:24:26.609987       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1217 08:24:26.621912       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 08:24:26.621925       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 08:24:26.621928       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 08:24:26.621930       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [26dd379d79c851d78e49afda695d69aa1a1496132def151ccc64aca74a55ba04] <==
	I1217 08:24:24.306948       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 08:24:24.408753       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 08:24:24.408959       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.97"]
	E1217 08:24:24.409406       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 08:24:24.472565       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1217 08:24:24.473136       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1217 08:24:24.473625       1 server_linux.go:132] "Using iptables Proxier"
	I1217 08:24:24.489420       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 08:24:24.490375       1 server.go:527] "Version info" version="v1.34.3"
	I1217 08:24:24.490835       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:24:24.495916       1 config.go:200] "Starting service config controller"
	I1217 08:24:24.495929       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 08:24:24.495941       1 config.go:106] "Starting endpoint slice config controller"
	I1217 08:24:24.495944       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 08:24:24.495953       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 08:24:24.495957       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 08:24:24.498702       1 config.go:309] "Starting node config controller"
	I1217 08:24:24.499760       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 08:24:24.499924       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 08:24:24.596845       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 08:24:24.596895       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 08:24:24.597330       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [3d925826c8f6f8fbf3c96ac2c39bef90c2c64073d2bb8523c2026f14a05cad7e] <==
	I1217 08:25:08.849135       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 08:25:08.950442       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 08:25:08.950565       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.97"]
	E1217 08:25:08.950716       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 08:25:08.992419       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1217 08:25:08.992478       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1217 08:25:08.992505       1 server_linux.go:132] "Using iptables Proxier"
	I1217 08:25:09.005187       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 08:25:09.005510       1 server.go:527] "Version info" version="v1.34.3"
	I1217 08:25:09.005536       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:25:09.010005       1 config.go:200] "Starting service config controller"
	I1217 08:25:09.010037       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 08:25:09.010055       1 config.go:106] "Starting endpoint slice config controller"
	I1217 08:25:09.010059       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 08:25:09.010077       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 08:25:09.010081       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 08:25:09.010846       1 config.go:309] "Starting node config controller"
	I1217 08:25:09.010853       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 08:25:09.010858       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 08:25:09.110469       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 08:25:09.110511       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 08:25:09.111175       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [8e5b186eccdc4e422c6f76bbba5bc0ef1eb32b926601973fa064f3bcadfc7b7c] <==
	I1217 08:24:21.380666       1 serving.go:386] Generated self-signed cert in-memory
	I1217 08:24:23.268738       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1217 08:24:23.268799       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:24:23.273765       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 08:24:23.274282       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1217 08:24:23.274346       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1217 08:24:23.274390       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:24:23.274410       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:24:23.274438       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 08:24:23.274454       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 08:24:23.274349       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 08:24:23.376202       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1217 08:24:23.376326       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:24:23.376335       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 08:24:46.375307       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1217 08:24:46.375436       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1217 08:24:46.375449       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1217 08:24:46.375477       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 08:24:46.375536       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:24:46.375557       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1217 08:24:46.375741       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1217 08:24:46.381328       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a1ec936640a50471bba654dfa6356efdd70d3a5729b21c3a8f12de69136db7cf] <==
	I1217 08:25:05.215724       1 serving.go:386] Generated self-signed cert in-memory
	W1217 08:25:07.249449       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 08:25:07.249488       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 08:25:07.249499       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 08:25:07.249506       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 08:25:07.315539       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1217 08:25:07.315574       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:25:07.321072       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 08:25:07.321202       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:25:07.321215       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:25:07.321269       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 08:25:07.421806       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 08:34:53 functional-122342 kubelet[5610]: E1217 08:34:53.367842    5610 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765960493367479897  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:243805}  inodes_used:{value:113}}"
	Dec 17 08:34:53 functional-122342 kubelet[5610]: E1217 08:34:53.367863    5610 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765960493367479897  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:243805}  inodes_used:{value:113}}"
	Dec 17 08:34:53 functional-122342 kubelet[5610]: E1217 08:34:53.821037    5610 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-76g5z" podUID="61c3ac51-6a79-43ce-b0ec-bd6c030bf75b"
	Dec 17 08:34:54 functional-122342 kubelet[5610]: E1217 08:34:54.821840    5610 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-b8nbt" podUID="a03a3faa-9feb-43f7-86f2-a398df95eddd"
	Dec 17 08:35:02 functional-122342 kubelet[5610]: E1217 08:35:02.934478    5610 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod3e2d39ca3e768afa6c2876cc33ec430d/crio-65d7100394df905a2f9450b95c42db3ef1ee66205b82099018b319f4a048e21d: Error finding container 65d7100394df905a2f9450b95c42db3ef1ee66205b82099018b319f4a048e21d: Status 404 returned error can't find the container with id 65d7100394df905a2f9450b95c42db3ef1ee66205b82099018b319f4a048e21d
	Dec 17 08:35:02 functional-122342 kubelet[5610]: E1217 08:35:02.935094    5610 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod08fa188e-90ff-4dfe-86df-b40eef36765d/crio-546f8fb776a5c3846f1370782db9e94fd5fcd478121542ff2d33b26798b43225: Error finding container 546f8fb776a5c3846f1370782db9e94fd5fcd478121542ff2d33b26798b43225: Status 404 returned error can't find the container with id 546f8fb776a5c3846f1370782db9e94fd5fcd478121542ff2d33b26798b43225
	Dec 17 08:35:02 functional-122342 kubelet[5610]: E1217 08:35:02.935419    5610 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod8ce8ca7de0471d348f97a4fbf14f4cf4/crio-34d0183982ab8397230f2901ed1e419778a678d9a64e966f6f60c06d4840cb51: Error finding container 34d0183982ab8397230f2901ed1e419778a678d9a64e966f6f60c06d4840cb51: Status 404 returned error can't find the container with id 34d0183982ab8397230f2901ed1e419778a678d9a64e966f6f60c06d4840cb51
	Dec 17 08:35:02 functional-122342 kubelet[5610]: E1217 08:35:02.935703    5610 manager.go:1116] Failed to create existing container: /kubepods/burstable/podba342c5421c70f9a936a60f9dc9b0678/crio-ff0e203b9a657d1d2b8e9034a0d8f8948a408b2d463c68ba29cacc577e6b0184: Error finding container ff0e203b9a657d1d2b8e9034a0d8f8948a408b2d463c68ba29cacc577e6b0184: Status 404 returned error can't find the container with id ff0e203b9a657d1d2b8e9034a0d8f8948a408b2d463c68ba29cacc577e6b0184
	Dec 17 08:35:02 functional-122342 kubelet[5610]: E1217 08:35:02.936048    5610 manager.go:1116] Failed to create existing container: /kubepods/besteffort/poddaf12b32-6915-43d9-b1b0-c897d53bca11/crio-d57f6fa06bf8adabfb271001e8803287eb8ffadb89098032e8d2a68c25ee6f1a: Error finding container d57f6fa06bf8adabfb271001e8803287eb8ffadb89098032e8d2a68c25ee6f1a: Status 404 returned error can't find the container with id d57f6fa06bf8adabfb271001e8803287eb8ffadb89098032e8d2a68c25ee6f1a
	Dec 17 08:35:02 functional-122342 kubelet[5610]: E1217 08:35:02.936408    5610 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod2b2d63c8-592a-4d1c-a3e6-cfcc8c3d6ee9/crio-37756fdf1a3717047c0fd184ce9a77415ea73d8ba538e154786cde5d6f4806c8: Error finding container 37756fdf1a3717047c0fd184ce9a77415ea73d8ba538e154786cde5d6f4806c8: Status 404 returned error can't find the container with id 37756fdf1a3717047c0fd184ce9a77415ea73d8ba538e154786cde5d6f4806c8
	Dec 17 08:35:03 functional-122342 kubelet[5610]: E1217 08:35:03.370837    5610 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765960503370392493  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:243805}  inodes_used:{value:113}}"
	Dec 17 08:35:03 functional-122342 kubelet[5610]: E1217 08:35:03.370876    5610 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765960503370392493  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:243805}  inodes_used:{value:113}}"
	Dec 17 08:35:06 functional-122342 kubelet[5610]: E1217 08:35:06.823897    5610 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wzrhk" podUID="a3799b22-1e17-4740-b6ba-7efe7afa6d21"
	Dec 17 08:35:08 functional-122342 kubelet[5610]: E1217 08:35:08.821854    5610 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-b8nbt" podUID="a03a3faa-9feb-43f7-86f2-a398df95eddd"
	Dec 17 08:35:10 functional-122342 kubelet[5610]: E1217 08:35:10.771909    5610 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 17 08:35:10 functional-122342 kubelet[5610]: E1217 08:35:10.772002    5610 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 17 08:35:10 functional-122342 kubelet[5610]: E1217 08:35:10.772207    5610 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-xb94z_kubernetes-dashboard(4090411a-f6a9-4368-b45c-9ebe9d0f52a6): ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 17 08:35:10 functional-122342 kubelet[5610]: E1217 08:35:10.772315    5610 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-xb94z" podUID="4090411a-f6a9-4368-b45c-9ebe9d0f52a6"
	Dec 17 08:35:13 functional-122342 kubelet[5610]: E1217 08:35:13.372776    5610 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765960513372447700  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:243805}  inodes_used:{value:113}}"
	Dec 17 08:35:13 functional-122342 kubelet[5610]: E1217 08:35:13.372820    5610 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765960513372447700  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:243805}  inodes_used:{value:113}}"
	Dec 17 08:35:21 functional-122342 kubelet[5610]: E1217 08:35:21.822873    5610 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wzrhk" podUID="a3799b22-1e17-4740-b6ba-7efe7afa6d21"
	Dec 17 08:35:22 functional-122342 kubelet[5610]: E1217 08:35:22.823128    5610 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-b8nbt" podUID="a03a3faa-9feb-43f7-86f2-a398df95eddd"
	Dec 17 08:35:23 functional-122342 kubelet[5610]: E1217 08:35:23.374767    5610 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765960523374440606  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:243805}  inodes_used:{value:113}}"
	Dec 17 08:35:23 functional-122342 kubelet[5610]: E1217 08:35:23.374788    5610 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765960523374440606  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:243805}  inodes_used:{value:113}}"
	Dec 17 08:35:23 functional-122342 kubelet[5610]: E1217 08:35:23.823330    5610 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-xb94z" podUID="4090411a-f6a9-4368-b45c-9ebe9d0f52a6"
	
	
	==> storage-provisioner [3dbb693f0f59248e6ead291b0e92aba3429534efec95a15bc4cda79405de7355] <==
	W1217 08:35:06.535443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:35:08.539122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:35:08.547606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:35:10.551393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:35:10.557204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:35:12.560587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:35:12.566341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:35:14.570088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:35:14.578227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:35:16.581362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:35:16.587358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:35:18.591173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:35:18.595952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:35:20.599448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:35:20.604567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:35:22.609336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:35:22.616911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:35:24.620808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:35:24.625698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:35:26.629704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:35:26.636425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:35:28.639589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:35:28.647223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:35:30.653832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:35:30.660691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [dba1678e8c75f270f557406de1ea4d54c058405d51078888b5a24b251f0680b2] <==
	I1217 08:24:24.152013       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 08:24:24.184290       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 08:24:24.184528       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 08:24:24.190203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:24:27.645013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:24:31.904931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:24:35.505512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:24:38.559669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:24:41.581804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:24:41.593137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 08:24:41.593895       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 08:24:41.594111       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-122342_d356f3bd-f6df-4a27-b67c-4da63394224b!
	I1217 08:24:41.595849       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"159536d6-e36d-4069-a75a-1c5c38b11b6e", APIVersion:"v1", ResourceVersion:"546", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-122342_d356f3bd-f6df-4a27-b67c-4da63394224b became leader
	W1217 08:24:41.598733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:24:41.607715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 08:24:41.695516       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-122342_d356f3bd-f6df-4a27-b67c-4da63394224b!
	W1217 08:24:43.611819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:24:43.616110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:24:45.619287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:24:45.624300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-122342 -n functional-122342
helpers_test.go:270: (dbg) Run:  kubectl --context functional-122342 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-75c85bcc94-76g5z hello-node-connect-7d85dfc575-b8nbt dashboard-metrics-scraper-77bf4d6c4c-xb94z kubernetes-dashboard-855c9754f9-wzrhk
helpers_test.go:283: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-122342 describe pod busybox-mount hello-node-75c85bcc94-76g5z hello-node-connect-7d85dfc575-b8nbt dashboard-metrics-scraper-77bf4d6c4c-xb94z kubernetes-dashboard-855c9754f9-wzrhk
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-122342 describe pod busybox-mount hello-node-75c85bcc94-76g5z hello-node-connect-7d85dfc575-b8nbt dashboard-metrics-scraper-77bf4d6c4c-xb94z kubernetes-dashboard-855c9754f9-wzrhk: exit status 1 (87.468676ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-122342/192.168.39.97
	Start Time:       Wed, 17 Dec 2025 08:25:31 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://78c0b6faa76255f96dd62af5fd66262225c7dd99481de1050acd67d212d3a5cb
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 17 Dec 2025 08:26:32 +0000
	      Finished:     Wed, 17 Dec 2025 08:26:32 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-69vrq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-69vrq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  10m    default-scheduler  Successfully assigned default/busybox-mount to functional-122342
	  Normal  Pulling    9m59s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     8m59s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.023s (1m0.315s including waiting). Image size: 4631262 bytes.
	  Normal  Created    8m59s  kubelet            Created container: mount-munger
	  Normal  Started    8m59s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-76g5z
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-122342/192.168.39.97
	Start Time:       Wed, 17 Dec 2025 08:25:29 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bv45r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bv45r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  10m                default-scheduler  Successfully assigned default/hello-node-75c85bcc94-76g5z to functional-122342
	  Warning  Failed     115s (x4 over 9m)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     115s (x4 over 9m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    38s (x11 over 9m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     38s (x11 over 9m)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    23s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-b8nbt
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-122342/192.168.39.97
	Start Time:       Wed, 17 Dec 2025 08:25:29 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8zczn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8zczn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-b8nbt to functional-122342
	  Warning  Failed     9m31s                kubelet            Failed to pull image "kicbase/echo-server": initializing source docker://kicbase/echo-server:latest: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m36s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     84s (x5 over 9m31s)  kubelet            Error: ErrImagePull
	  Warning  Failed     84s (x4 over 8m25s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    9s (x15 over 9m30s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     9s (x15 over 9m30s)  kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-xb94z" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-wzrhk" not found

** /stderr **
helpers_test.go:288: kubectl --context functional-122342 describe pod busybox-mount hello-node-75c85bcc94-76g5z hello-node-connect-7d85dfc575-b8nbt dashboard-metrics-scraper-77bf4d6c4c-xb94z kubernetes-dashboard-855c9754f9-wzrhk: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.81s)

x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-122342 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-122342 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-76g5z" [61c3ac51-6a79-43ce-b0ec-bd6c030bf75b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1217 08:25:29.229769  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-122342 -n functional-122342
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-17 08:35:29.428007744 +0000 UTC m=+1204.163205083
functional_test.go:1460: (dbg) Run:  kubectl --context functional-122342 describe po hello-node-75c85bcc94-76g5z -n default
functional_test.go:1460: (dbg) kubectl --context functional-122342 describe po hello-node-75c85bcc94-76g5z -n default:
Name:             hello-node-75c85bcc94-76g5z
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-122342/192.168.39.97
Start Time:       Wed, 17 Dec 2025 08:25:29 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bv45r (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-bv45r:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-76g5z to functional-122342
Warning  Failed     113s (x4 over 8m58s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     113s (x4 over 8m58s)  kubelet            Error: ErrImagePull
Normal   BackOff    36s (x11 over 8m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     36s (x11 over 8m58s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    21s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-122342 logs hello-node-75c85bcc94-76g5z -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-122342 logs hello-node-75c85bcc94-76g5z -n default: exit status 1 (72.819746ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-76g5z" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1460: kubectl --context functional-122342 logs hello-node-75c85bcc94-76g5z -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.56s)

x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-122342 service --namespace=default --https --url hello-node: exit status 115 (240.616314ms)

-- stdout --
	https://192.168.39.97:32065
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-122342 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.24s)

x
+
TestFunctional/parallel/ServiceCmd/Format (0.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-122342 service hello-node --url --format={{.IP}}: exit status 115 (243.655436ms)

-- stdout --
	192.168.39.97
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-122342 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.24s)

x
+
TestFunctional/parallel/ServiceCmd/URL (0.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-122342 service hello-node --url: exit status 115 (243.70363ms)

                                                
                                                
-- stdout --
	http://192.168.39.97:32065
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-122342 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.97:32065
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.24s)
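All three ServiceCmd subtests fail the same way: minikube service resolves a NodePort URL for hello-node but exits with status 115 (SVC_UNREACHABLE) because no running pod backs the service at that moment. A minimal manual triage sketch, assuming the deployment carries an app=hello-node label (an assumption for illustration, not confirmed by this log):

	# hypothetical manual check against the same profile's context
	kubectl --context functional-122342 get deploy,endpoints hello-node
	kubectl --context functional-122342 get pods -l app=hello-node -o wide

An empty ENDPOINTS column would point to a pod scheduling or readiness race rather than a problem with the NodePort itself.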

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (3.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-452472 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-452472 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-452472 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-452472 --alsologtostderr -v=1] stderr:
I1217 08:40:39.007342  909354 out.go:360] Setting OutFile to fd 1 ...
I1217 08:40:39.007460  909354 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:40:39.007469  909354 out.go:374] Setting ErrFile to fd 2...
I1217 08:40:39.007473  909354 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:40:39.007704  909354 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
I1217 08:40:39.007954  909354 mustload.go:66] Loading cluster: functional-452472
I1217 08:40:39.008346  909354 config.go:182] Loaded profile config "functional-452472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 08:40:39.010664  909354 host.go:66] Checking if "functional-452472" exists ...
I1217 08:40:39.010851  909354 api_server.go:166] Checking apiserver status ...
I1217 08:40:39.010925  909354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1217 08:40:39.013098  909354 main.go:143] libmachine: domain functional-452472 has defined MAC address 52:54:00:92:6e:5e in network mk-functional-452472
I1217 08:40:39.013459  909354 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:92:6e:5e", ip: ""} in network mk-functional-452472: {Iface:virbr1 ExpiryTime:2025-12-17 09:35:51 +0000 UTC Type:0 Mac:52:54:00:92:6e:5e Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:functional-452472 Clientid:01:52:54:00:92:6e:5e}
I1217 08:40:39.013484  909354 main.go:143] libmachine: domain functional-452472 has defined IP address 192.168.39.226 and MAC address 52:54:00:92:6e:5e in network mk-functional-452472
I1217 08:40:39.013711  909354 sshutil.go:56] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/functional-452472/id_rsa Username:docker}
I1217 08:40:39.114762  909354 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/7616/cgroup
W1217 08:40:39.128364  909354 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/7616/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1217 08:40:39.128420  909354 ssh_runner.go:195] Run: ls
I1217 08:40:39.136065  909354 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8441/healthz ...
I1217 08:40:39.141907  909354 api_server.go:279] https://192.168.39.226:8441/healthz returned 200:
ok
W1217 08:40:39.141959  909354 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1217 08:40:39.142120  909354 config.go:182] Loaded profile config "functional-452472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 08:40:39.142135  909354 addons.go:70] Setting dashboard=true in profile "functional-452472"
I1217 08:40:39.142145  909354 addons.go:239] Setting addon dashboard=true in "functional-452472"
I1217 08:40:39.142185  909354 host.go:66] Checking if "functional-452472" exists ...
I1217 08:40:39.145135  909354 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1217 08:40:39.146328  909354 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1217 08:40:39.147501  909354 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1217 08:40:39.147526  909354 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1217 08:40:39.149395  909354 main.go:143] libmachine: domain functional-452472 has defined MAC address 52:54:00:92:6e:5e in network mk-functional-452472
I1217 08:40:39.149716  909354 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:92:6e:5e", ip: ""} in network mk-functional-452472: {Iface:virbr1 ExpiryTime:2025-12-17 09:35:51 +0000 UTC Type:0 Mac:52:54:00:92:6e:5e Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:functional-452472 Clientid:01:52:54:00:92:6e:5e}
I1217 08:40:39.149745  909354 main.go:143] libmachine: domain functional-452472 has defined IP address 192.168.39.226 and MAC address 52:54:00:92:6e:5e in network mk-functional-452472
I1217 08:40:39.149881  909354 sshutil.go:56] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/functional-452472/id_rsa Username:docker}
I1217 08:40:39.246014  909354 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1217 08:40:39.246039  909354 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1217 08:40:39.267528  909354 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1217 08:40:39.267564  909354 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1217 08:40:39.287497  909354 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1217 08:40:39.287536  909354 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1217 08:40:39.307918  909354 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1217 08:40:39.307938  909354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1217 08:40:39.328198  909354 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1217 08:40:39.328218  909354 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1217 08:40:39.347424  909354 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1217 08:40:39.347444  909354 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1217 08:40:39.370603  909354 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1217 08:40:39.370639  909354 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1217 08:40:39.391624  909354 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1217 08:40:39.391652  909354 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1217 08:40:39.414495  909354 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1217 08:40:39.414534  909354 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1217 08:40:39.437908  909354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1217 08:40:40.095287  909354 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-452472 addons enable metrics-server

                                                
                                                
I1217 08:40:40.096530  909354 addons.go:202] Writing out "functional-452472" config to set dashboard=true...
W1217 08:40:40.096800  909354 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1217 08:40:40.097479  909354 kapi.go:59] client config for functional-452472: &rest.Config{Host:"https://192.168.39.226:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.crt", KeyFile:"/home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.key", CAFile:"/home/jenkins/minikube-integration/22182-893359/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2818a40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1217 08:40:40.097992  909354 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1217 08:40:40.098012  909354 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1217 08:40:40.098018  909354 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1217 08:40:40.098022  909354 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1217 08:40:40.098027  909354 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1217 08:40:40.108760  909354 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  8f7b9ef7-2833-4e62-9fa7-9272ab67cccf 893 0 2025-12-17 08:40:40 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-17 08:40:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.97.57.122,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.97.57.122],IPFamilies:[IPv4],AllocateLoadBalancerN
odePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1217 08:40:40.108934  909354 out.go:285] * Launching proxy ...
* Launching proxy ...
I1217 08:40:40.109023  909354 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-452472 proxy --port 36195]
I1217 08:40:40.109435  909354 dashboard.go:159] Waiting for kubectl to output host:port ...
I1217 08:40:40.156122  909354 out.go:203] 
W1217 08:40:40.157321  909354 out.go:285] X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
W1217 08:40:40.157339  909354 out.go:285] * 
* 
W1217 08:40:40.162162  909354 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1217 08:40:40.163484  909354 out.go:203] 
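The dashboard subtest fails at the proxy step: minikube launches the kubectl proxy command logged at dashboard.go:154 and waits for it to print a host:port line, but the child exits before producing any output (readByteWithTimeout: EOF). A sketch of reproducing that step by hand; the port-collision check is only a guess at a likely cause, not a finding from this log:

	# re-run the exact proxy command minikube spawned (taken from the log above)
	kubectl --context functional-452472 proxy --port 36195
	# a stale listener on the requested port would make the proxy exit immediately
	ss -ltnp | grep 36195

Running the proxy interactively also surfaces its stderr, which the daemonized dashboard run above does not capture.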
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-452472 -n functional-452472
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-452472 logs -n 25: (1.382205392s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-452472 ssh sudo systemctl is-active containerd                                                                                                    │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │                     │
	│ license   │                                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ ssh       │ functional-452472 ssh sudo cat /etc/ssl/certs/897277.pem                                                                                                     │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ ssh       │ functional-452472 ssh sudo cat /usr/share/ca-certificates/897277.pem                                                                                         │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ ssh       │ functional-452472 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                     │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ ssh       │ functional-452472 ssh sudo cat /etc/ssl/certs/8972772.pem                                                                                                    │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ ssh       │ functional-452472 ssh sudo cat /usr/share/ca-certificates/8972772.pem                                                                                        │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ ssh       │ functional-452472 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                     │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ image     │ functional-452472 image load --daemon kicbase/echo-server:functional-452472 --alsologtostderr                                                                │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ image     │ functional-452472 image ls                                                                                                                                   │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ image     │ functional-452472 image load --daemon kicbase/echo-server:functional-452472 --alsologtostderr                                                                │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ image     │ functional-452472 image ls                                                                                                                                   │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ image     │ functional-452472 image load --daemon kicbase/echo-server:functional-452472 --alsologtostderr                                                                │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ image     │ functional-452472 image ls                                                                                                                                   │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ image     │ functional-452472 image save kicbase/echo-server:functional-452472 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ image     │ functional-452472 image rm kicbase/echo-server:functional-452472 --alsologtostderr                                                                           │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ image     │ functional-452472 image ls                                                                                                                                   │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ image     │ functional-452472 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ image     │ functional-452472 image ls                                                                                                                                   │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ image     │ functional-452472 image save --daemon kicbase/echo-server:functional-452472 --alsologtostderr                                                                │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ start     │ -p functional-452472 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                    │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │                     │
	│ start     │ -p functional-452472 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                              │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │                     │
	│ ssh       │ functional-452472 ssh sudo cat /etc/test/nested/copy/897277/hosts                                                                                            │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ start     │ -p functional-452472 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                    │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-452472 --alsologtostderr -v=1                                                                                               │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │                     │
	└───────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 08:40:33
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 08:40:33.874428  909302 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:40:33.874739  909302 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:40:33.874750  909302 out.go:374] Setting ErrFile to fd 2...
	I1217 08:40:33.874755  909302 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:40:33.875044  909302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
	I1217 08:40:33.875537  909302 out.go:368] Setting JSON to false
	I1217 08:40:33.876605  909302 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12180,"bootTime":1765948654,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:40:33.876662  909302 start.go:143] virtualization: kvm guest
	I1217 08:40:33.878686  909302 out.go:179] * [functional-452472] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 08:40:33.880166  909302 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 08:40:33.880164  909302 notify.go:221] Checking for updates...
	I1217 08:40:33.881527  909302 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:40:33.882780  909302 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-893359/kubeconfig
	I1217 08:40:33.884018  909302 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-893359/.minikube
	I1217 08:40:33.885026  909302 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 08:40:33.886072  909302 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 08:40:33.887822  909302 config.go:182] Loaded profile config "functional-452472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:40:33.888499  909302 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:40:33.919391  909302 out.go:179] * Using the kvm2 driver based on the existing profile
	I1217 08:40:33.920611  909302 start.go:309] selected driver: kvm2
	I1217 08:40:33.920623  909302 start.go:927] validating driver "kvm2" against &{Name:functional-452472 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-rc.1 ClusterName:functional-452472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.226 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:262
80h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:40:33.920711  909302 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 08:40:33.922412  909302 out.go:203] 
	W1217 08:40:33.923375  909302 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 08:40:33.924380  909302 out.go:203] 
	
	
	==> CRI-O <==
	Dec 17 08:40:40 functional-452472 crio[6904]: time="2025-12-17 08:40:40.990438510Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765960840990364005,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:195886,},InodesUsed:&UInt64Value{Value:92,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0336ff7d-a2c6-478b-8346-42547ac7f544 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:40:40 functional-452472 crio[6904]: time="2025-12-17 08:40:40.991502750Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=06423f63-ae4f-4c2b-8548-8b3f5b5bf155 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:40:40 functional-452472 crio[6904]: time="2025-12-17 08:40:40.991574640Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=06423f63-ae4f-4c2b-8548-8b3f5b5bf155 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:40:40 functional-452472 crio[6904]: time="2025-12-17 08:40:40.991936667Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bedb4e1b318c719cfedaec8f80eb9bb307b355e8616bbcde9ca2b4fc3ced8ec0,PodSandboxId:e3660be0b8fbb99d1b99ebb1e270e6a59d8d4e81eb33571b0c50b9f3d52ab1e1,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765960833416433240,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cb0b118-a214-4183-94c8-217df6984d7e,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bafea12ac56a396c9e3c5b4985c434f4aa90ca96f2043cf0b42335c4ccfaee4,PodSandboxId:06869ac992b1b8111674755ef186965355b3cf5b8bd68a2ce6122312ee8e5839,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765960819787793061,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 096ac532-94e3-4c84-834b-a3749b9fc71c,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f77421a9c08d419666dd2d4d752ec5754ea00f9553624bf9ea84af84ca1d62,PodSandboxId:0959434dec2e35f50499215c76be050fdacc1c2c181f2e36438c3f8067ac5ad1,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765960737717717277,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-tvjx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ccdb9e-3552-4d0c-aa79-209aa4bc384e,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a38390ce93a173e94bede6cf6421552defd4cc8b49d9d37f2ecf97671f75d6c2,PodSandboxId:64eb7bcbb251e4830e3d11c315e467160b0687269f60e88bd80fd4d006e0b482,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765960737650267658,Labels:map[string]string{io.kubernetes.container.name: coredns,io
.kubernetes.pod.name: coredns-7d764666f9-vlpbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5767dc2c-d2a7-40df-9980-cf0eb5099135,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a79860283a5d688bfe587f86250a792d36d69e5ac6ad4f9fb2f0d883b408cb,PodSandboxId:5af1f6c2a652c192a45c2815827f125736e7052adfbb728b910354f9c511741e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Imag
e:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1765960737135627122,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mlzkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006580b2-c5aa-46f2-a109-0b4e4293a31d,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ae20198e715809c55c5aa1cc0b3381b754821cfe0b0c46ae53cae097d05216e,PodSandboxId:011e23e331c5febae5e463faffa959d50f8f082c6aa1d198ec0f0213b1f4d179,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765960737140440279,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc28e214-79dc-4410-9e19-5e01dc8c177e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e314a4cd7ebcb67b1f912124f948ae99758af84ebad9776b3b3602546d454b,PodSandboxId:c7036e926edab5c4d154a6569f41063e98c69df84cbac6722506810b7601fd63,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:5032a56602e1b
9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1765960734512370039,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a64983b2a6e32426f2b85bf4025ab6,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab0b7679f87d69cdc32a244cea1ba39e59fdf91d2f53f6311b9b967026a439d,PodSandboxId:4b5459e69ba11517197d53fc6fffce57e
140a1d288aee3172076464d8d348d83,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1765960734492361795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c66f6f2e75aeefdfac5925984824a19,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afae314e118b19ff9c06569c5a14f8f91
ab46fec1e8130e020e703f720115437,PodSandboxId:22bb51b96af29060aee8fa8f0991b7aa52bfa9569e427f40f109496493cb7526,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,CreatedAt:1765960734486910864,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f5a1f5e0c67075627ce1766978c877,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bbc4201b4702bae790066874d19fed5d287ad766c42b9fa697cc27070c56ab9,PodSandboxId:18a61a19556c83b9cc2d82630bc4a99969d429742da752a4b2f76392ba842128,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1765960734446267806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d663db3988802dd0b7f3a700e2703644,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb75140ee1acd6e5714883d02293293abb7231572ea9276571847bb58080c36e,PodSandboxId:69fa824c620eb19adc3f011c88a7a2667aaf4b0454d144f44b6057ca46b8c0e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_EXITED,CreatedAt:1765960693659832385,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mlzkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006580b2-c5aa-46f2-a109-0b4e4293a31d,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e85c94f90b224595b7f5063268abd750dfae8b503a1bd967141af459bce0472d,PodSandboxId:71812c39917583f284a2a4ed5532a76e8241ef9f313532cbbb5102c9f47a0aa6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765960693676740655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-vlpbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5767dc2c-d2a7-40df-9980-cf0eb5099135,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c41c46ed24892773ba743ef8a2e7b0127cf5a0e070f13fa09b5e982909263045,PodSandboxId:f2694d814f15a1423255f88ffab3a14679cad607b610b3d46cae4f97aba353e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765960693644322289,Labels:map[string]string{io.kubernetes.container.
name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-tvjx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ccdb9e-3552-4d0c-aa79-209aa4bc384e,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24a8c206284f995e1beab427472536e2d5ac218b538808a3a7eb1194ce0849f8,PodSandboxId:715e5eeee18dfc4b463d53d93a80cf8ab421378058d8c21e32071b228d5553f3,Metadata:&ContainerMetadata{Name:etcd,Attemp
t:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_EXITED,CreatedAt:1765960691012318916,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c66f6f2e75aeefdfac5925984824a19,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56fd93bc6d042ffc6d5e70b53079a47da3495248a5349c01d8f3a3ac359bf531,PodSandboxId:afad058a661b32de13f3638ac69bece9
0b9fa781f8c8dc6edb25b23ddb026fc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_EXITED,CreatedAt:1765960691013585704,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d663db3988802dd0b7f3a700e2703644,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9
aaa296c1ec268580ef35dfb6084b2d11765a0551fd3025f7cb535ed69d8c036,PodSandboxId:b8a40898664357842f4a01c490a316886b9502b89edcb571e646fed65e16d0b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_EXITED,CreatedAt:1765960690993886417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a64983b2a6e32426f2b85bf4025ab6,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efc92cf4ba908fee23149245dbc756291b252ab89d06ae1cd620e3763c633f12,PodSandboxId:3a70ccdc3e38a0bd793053d58cf9bef42ae122e408cbe7c66674d9eadbdff820,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765960574188270257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc28e214-79dc-4410-9e19-5e01dc8c177e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=06423f63-ae4f-4c2b-8548-8b3f5b5bf155 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:40:41 functional-452472 crio[6904]: time="2025-12-17 08:40:41.029227413Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fe84b583-72f4-48af-86e0-c1c7e01e4a63 name=/runtime.v1.RuntimeService/Version
	Dec 17 08:40:41 functional-452472 crio[6904]: time="2025-12-17 08:40:41.029671206Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fe84b583-72f4-48af-86e0-c1c7e01e4a63 name=/runtime.v1.RuntimeService/Version
	Dec 17 08:40:41 functional-452472 crio[6904]: time="2025-12-17 08:40:41.031075267Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eed75404-8e68-4729-b368-3f08617b199c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:40:41 functional-452472 crio[6904]: time="2025-12-17 08:40:41.032337229Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765960841032315077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:195886,},InodesUsed:&UInt64Value{Value:92,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eed75404-8e68-4729-b368-3f08617b199c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:40:41 functional-452472 crio[6904]: time="2025-12-17 08:40:41.033935464Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dae12fc3-53ba-4c9c-86c2-5adf3763578e name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:40:41 functional-452472 crio[6904]: time="2025-12-17 08:40:41.033988390Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dae12fc3-53ba-4c9c-86c2-5adf3763578e name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:40:41 functional-452472 crio[6904]: time="2025-12-17 08:40:41.034443737Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bedb4e1b318c719cfedaec8f80eb9bb307b355e8616bbcde9ca2b4fc3ced8ec0,PodSandboxId:e3660be0b8fbb99d1b99ebb1e270e6a59d8d4e81eb33571b0c50b9f3d52ab1e1,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765960833416433240,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cb0b118-a214-4183-94c8-217df6984d7e,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bafea12ac56a396c9e3c5b4985c434f4aa90ca96f2043cf0b42335c4ccfaee4,PodSandboxId:06869ac992b1b8111674755ef186965355b3cf5b8bd68a2ce6122312ee8e5839,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765960819787793061,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 096ac532-94e3-4c84-834b-a3749b9fc71c,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f77421a9c08d419666dd2d4d752ec5754ea00f9553624bf9ea84af84ca1d62,PodSandboxId:0959434dec2e35f50499215c76be050fdacc1c2c181f2e36438c3f8067ac5ad1,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765960737717717277,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-tvjx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ccdb9e-3552-4d0c-aa79-209aa4bc384e,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a38390ce93a173e94bede6cf6421552defd4cc8b49d9d37f2ecf97671f75d6c2,PodSandboxId:64eb7bcbb251e4830e3d11c315e467160b0687269f60e88bd80fd4d006e0b482,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765960737650267658,Labels:map[string]string{io.kubernetes.container.name: coredns,io
.kubernetes.pod.name: coredns-7d764666f9-vlpbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5767dc2c-d2a7-40df-9980-cf0eb5099135,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a79860283a5d688bfe587f86250a792d36d69e5ac6ad4f9fb2f0d883b408cb,PodSandboxId:5af1f6c2a652c192a45c2815827f125736e7052adfbb728b910354f9c511741e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Imag
e:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1765960737135627122,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mlzkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006580b2-c5aa-46f2-a109-0b4e4293a31d,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ae20198e715809c55c5aa1cc0b3381b754821cfe0b0c46ae53cae097d05216e,PodSandboxId:011e23e331c5febae5e463faffa959d50f8f082c6aa1d198ec0f0213b1f4d179,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765960737140440279,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc28e214-79dc-4410-9e19-5e01dc8c177e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e314a4cd7ebcb67b1f912124f948ae99758af84ebad9776b3b3602546d454b,PodSandboxId:c7036e926edab5c4d154a6569f41063e98c69df84cbac6722506810b7601fd63,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:5032a56602e1b
9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1765960734512370039,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a64983b2a6e32426f2b85bf4025ab6,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab0b7679f87d69cdc32a244cea1ba39e59fdf91d2f53f6311b9b967026a439d,PodSandboxId:4b5459e69ba11517197d53fc6fffce57e
140a1d288aee3172076464d8d348d83,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1765960734492361795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c66f6f2e75aeefdfac5925984824a19,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afae314e118b19ff9c06569c5a14f8f91
ab46fec1e8130e020e703f720115437,PodSandboxId:22bb51b96af29060aee8fa8f0991b7aa52bfa9569e427f40f109496493cb7526,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,CreatedAt:1765960734486910864,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f5a1f5e0c67075627ce1766978c877,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bbc4201b4702bae790066874d19fed5d287ad766c42b9fa697cc27070c56ab9,PodSandboxId:18a61a19556c83b9cc2d82630bc4a99969d429742da752a4b2f76392ba842128,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1765960734446267806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d663db3988802dd0b7f3a700e2703644,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb75140ee1acd6e5714883d02293293abb7231572ea9276571847bb58080c36e,PodSandboxId:69fa824c620eb19adc3f011c88a7a2667aaf4b0454d144f44b6057ca46b8c0e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_EXITED,CreatedAt:1765960693659832385,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mlzkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006580b2-c5aa-46f2-a109-0b4e4293a31d,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e85c94f90b224595b7f5063268abd750dfae8b503a1bd967141af459bce0472d,PodSandboxId:71812c39917583f284a2a4ed5532a76e8241ef9f313532cbbb5102c9f47a0aa6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765960693676740655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-vlpbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5767dc2c-d2a7-40df-9980-cf0eb5099135,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c41c46ed24892773ba743ef8a2e7b0127cf5a0e070f13fa09b5e982909263045,PodSandboxId:f2694d814f15a1423255f88ffab3a14679cad607b610b3d46cae4f97aba353e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765960693644322289,Labels:map[string]string{io.kubernetes.container.
name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-tvjx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ccdb9e-3552-4d0c-aa79-209aa4bc384e,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24a8c206284f995e1beab427472536e2d5ac218b538808a3a7eb1194ce0849f8,PodSandboxId:715e5eeee18dfc4b463d53d93a80cf8ab421378058d8c21e32071b228d5553f3,Metadata:&ContainerMetadata{Name:etcd,Attemp
t:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_EXITED,CreatedAt:1765960691012318916,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c66f6f2e75aeefdfac5925984824a19,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56fd93bc6d042ffc6d5e70b53079a47da3495248a5349c01d8f3a3ac359bf531,PodSandboxId:afad058a661b32de13f3638ac69bece9
0b9fa781f8c8dc6edb25b23ddb026fc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_EXITED,CreatedAt:1765960691013585704,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d663db3988802dd0b7f3a700e2703644,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9
aaa296c1ec268580ef35dfb6084b2d11765a0551fd3025f7cb535ed69d8c036,PodSandboxId:b8a40898664357842f4a01c490a316886b9502b89edcb571e646fed65e16d0b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_EXITED,CreatedAt:1765960690993886417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a64983b2a6e32426f2b85bf4025ab6,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efc92cf4ba908fee23149245dbc756291b252ab89d06ae1cd620e3763c633f12,PodSandboxId:3a70ccdc3e38a0bd793053d58cf9bef42ae122e408cbe7c66674d9eadbdff820,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765960574188270257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc28e214-79dc-4410-9e19-5e01dc8c177e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dae12fc3-53ba-4c9c-86c2-5adf3763578e name=/runtime.v1.RuntimeService/ListContainers
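Editor's note: the entries above are CRI-O answering standard CRI gRPC calls (/runtime.v1.RuntimeService/ListContainers, Version, ImageFsInfo) from the kubelet/minikube tooling. For reference only, below is a minimal standalone Go sketch, not part of the minikube test suite, that issues the same ListContainers call directly against the CRI-O socket; the socket path (/var/run/crio/crio.sock) and the printed columns are assumptions about a typical minikube VM, not taken from this report.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed CRI-O socket location inside the minikube VM; adjust as needed.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI-O socket: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	// An empty filter exercises the "No filters were applied, returning full
	// container list" path seen in the debug log above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range resp.Containers {
		// Truncated ID, container name, attempt counter, and state,
		// roughly mirroring the fields in the responses logged above.
		fmt.Printf("%.13s  %-25s  attempt=%d  %s\n",
			c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}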
	Dec 17 08:40:41 functional-452472 crio[6904]: time="2025-12-17 08:40:41.063399084Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b427773e-a449-4133-bd7a-0487e2d80484 name=/runtime.v1.RuntimeService/Version
	Dec 17 08:40:41 functional-452472 crio[6904]: time="2025-12-17 08:40:41.063591815Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b427773e-a449-4133-bd7a-0487e2d80484 name=/runtime.v1.RuntimeService/Version
	Dec 17 08:40:41 functional-452472 crio[6904]: time="2025-12-17 08:40:41.064887620Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5bbee6af-d98f-4faf-8707-e7f3352f277c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:40:41 functional-452472 crio[6904]: time="2025-12-17 08:40:41.066116862Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765960841066092288,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:195886,},InodesUsed:&UInt64Value{Value:92,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5bbee6af-d98f-4faf-8707-e7f3352f277c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:40:41 functional-452472 crio[6904]: time="2025-12-17 08:40:41.067210480Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29556b1d-3cc7-4c00-91d6-68c37902790e name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:40:41 functional-452472 crio[6904]: time="2025-12-17 08:40:41.067404462Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29556b1d-3cc7-4c00-91d6-68c37902790e name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:40:41 functional-452472 crio[6904]: time="2025-12-17 08:40:41.068347998Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bedb4e1b318c719cfedaec8f80eb9bb307b355e8616bbcde9ca2b4fc3ced8ec0,PodSandboxId:e3660be0b8fbb99d1b99ebb1e270e6a59d8d4e81eb33571b0c50b9f3d52ab1e1,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765960833416433240,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cb0b118-a214-4183-94c8-217df6984d7e,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bafea12ac56a396c9e3c5b4985c434f4aa90ca96f2043cf0b42335c4ccfaee4,PodSandboxId:06869ac992b1b8111674755ef186965355b3cf5b8bd68a2ce6122312ee8e5839,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765960819787793061,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 096ac532-94e3-4c84-834b-a3749b9fc71c,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f77421a9c08d419666dd2d4d752ec5754ea00f9553624bf9ea84af84ca1d62,PodSandboxId:0959434dec2e35f50499215c76be050fdacc1c2c181f2e36438c3f8067ac5ad1,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765960737717717277,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-tvjx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ccdb9e-3552-4d0c-aa79-209aa4bc384e,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a38390ce93a173e94bede6cf6421552defd4cc8b49d9d37f2ecf97671f75d6c2,PodSandboxId:64eb7bcbb251e4830e3d11c315e467160b0687269f60e88bd80fd4d006e0b482,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765960737650267658,Labels:map[string]string{io.kubernetes.container.name: coredns,io
.kubernetes.pod.name: coredns-7d764666f9-vlpbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5767dc2c-d2a7-40df-9980-cf0eb5099135,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a79860283a5d688bfe587f86250a792d36d69e5ac6ad4f9fb2f0d883b408cb,PodSandboxId:5af1f6c2a652c192a45c2815827f125736e7052adfbb728b910354f9c511741e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Imag
e:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1765960737135627122,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mlzkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006580b2-c5aa-46f2-a109-0b4e4293a31d,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ae20198e715809c55c5aa1cc0b3381b754821cfe0b0c46ae53cae097d05216e,PodSandboxId:011e23e331c5febae5e463faffa959d50f8f082c6aa1d198ec0f0213b1f4d179,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765960737140440279,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc28e214-79dc-4410-9e19-5e01dc8c177e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e314a4cd7ebcb67b1f912124f948ae99758af84ebad9776b3b3602546d454b,PodSandboxId:c7036e926edab5c4d154a6569f41063e98c69df84cbac6722506810b7601fd63,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:5032a56602e1b
9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1765960734512370039,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a64983b2a6e32426f2b85bf4025ab6,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab0b7679f87d69cdc32a244cea1ba39e59fdf91d2f53f6311b9b967026a439d,PodSandboxId:4b5459e69ba11517197d53fc6fffce57e
140a1d288aee3172076464d8d348d83,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1765960734492361795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c66f6f2e75aeefdfac5925984824a19,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afae314e118b19ff9c06569c5a14f8f91
ab46fec1e8130e020e703f720115437,PodSandboxId:22bb51b96af29060aee8fa8f0991b7aa52bfa9569e427f40f109496493cb7526,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,CreatedAt:1765960734486910864,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f5a1f5e0c67075627ce1766978c877,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bbc4201b4702bae790066874d19fed5d287ad766c42b9fa697cc27070c56ab9,PodSandboxId:18a61a19556c83b9cc2d82630bc4a99969d429742da752a4b2f76392ba842128,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1765960734446267806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d663db3988802dd0b7f3a700e2703644,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb75140ee1acd6e5714883d02293293abb7231572ea9276571847bb58080c36e,PodSandboxId:69fa824c620eb19adc3f011c88a7a2667aaf4b0454d144f44b6057ca46b8c0e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_EXITED,CreatedAt:1765960693659832385,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mlzkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006580b2-c5aa-46f2-a109-0b4e4293a31d,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e85c94f90b224595b7f5063268abd750dfae8b503a1bd967141af459bce0472d,PodSandboxId:71812c39917583f284a2a4ed5532a76e8241ef9f313532cbbb5102c9f47a0aa6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765960693676740655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-vlpbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5767dc2c-d2a7-40df-9980-cf0eb5099135,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c41c46ed24892773ba743ef8a2e7b0127cf5a0e070f13fa09b5e982909263045,PodSandboxId:f2694d814f15a1423255f88ffab3a14679cad607b610b3d46cae4f97aba353e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765960693644322289,Labels:map[string]string{io.kubernetes.container.
name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-tvjx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ccdb9e-3552-4d0c-aa79-209aa4bc384e,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24a8c206284f995e1beab427472536e2d5ac218b538808a3a7eb1194ce0849f8,PodSandboxId:715e5eeee18dfc4b463d53d93a80cf8ab421378058d8c21e32071b228d5553f3,Metadata:&ContainerMetadata{Name:etcd,Attemp
t:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_EXITED,CreatedAt:1765960691012318916,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c66f6f2e75aeefdfac5925984824a19,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56fd93bc6d042ffc6d5e70b53079a47da3495248a5349c01d8f3a3ac359bf531,PodSandboxId:afad058a661b32de13f3638ac69bece9
0b9fa781f8c8dc6edb25b23ddb026fc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_EXITED,CreatedAt:1765960691013585704,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d663db3988802dd0b7f3a700e2703644,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9
aaa296c1ec268580ef35dfb6084b2d11765a0551fd3025f7cb535ed69d8c036,PodSandboxId:b8a40898664357842f4a01c490a316886b9502b89edcb571e646fed65e16d0b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_EXITED,CreatedAt:1765960690993886417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a64983b2a6e32426f2b85bf4025ab6,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efc92cf4ba908fee23149245dbc756291b252ab89d06ae1cd620e3763c633f12,PodSandboxId:3a70ccdc3e38a0bd793053d58cf9bef42ae122e408cbe7c66674d9eadbdff820,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765960574188270257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc28e214-79dc-4410-9e19-5e01dc8c177e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=29556b1d-3cc7-4c00-91d6-68c37902790e name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:40:41 functional-452472 crio[6904]: time="2025-12-17 08:40:41.103593468Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4e60a82a-56fc-42e2-aa9d-fd94a4fc85c1 name=/runtime.v1.RuntimeService/Version
	Dec 17 08:40:41 functional-452472 crio[6904]: time="2025-12-17 08:40:41.103683527Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4e60a82a-56fc-42e2-aa9d-fd94a4fc85c1 name=/runtime.v1.RuntimeService/Version
	Dec 17 08:40:41 functional-452472 crio[6904]: time="2025-12-17 08:40:41.104957778Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=83ce0ede-c9f3-4056-a224-dc20c045e59c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:40:41 functional-452472 crio[6904]: time="2025-12-17 08:40:41.105994424Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765960841105967466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:195886,},InodesUsed:&UInt64Value{Value:92,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=83ce0ede-c9f3-4056-a224-dc20c045e59c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:40:41 functional-452472 crio[6904]: time="2025-12-17 08:40:41.106843303Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=546ab354-e629-4536-8610-16d73d097a2b name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:40:41 functional-452472 crio[6904]: time="2025-12-17 08:40:41.106914378Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=546ab354-e629-4536-8610-16d73d097a2b name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:40:41 functional-452472 crio[6904]: time="2025-12-17 08:40:41.107427620Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bedb4e1b318c719cfedaec8f80eb9bb307b355e8616bbcde9ca2b4fc3ced8ec0,PodSandboxId:e3660be0b8fbb99d1b99ebb1e270e6a59d8d4e81eb33571b0c50b9f3d52ab1e1,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765960833416433240,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cb0b118-a214-4183-94c8-217df6984d7e,},Annotations:map[string]string{io.kubernetes.container.hash: 8389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bafea12ac56a396c9e3c5b4985c434f4aa90ca96f2043cf0b42335c4ccfaee4,PodSandboxId:06869ac992b1b8111674755ef186965355b3cf5b8bd68a2ce6122312ee8e5839,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765960819787793061,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 096ac532-94e3-4c84-834b-a3749b9fc71c,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f77421a9c08d419666dd2d4d752ec5754ea00f9553624bf9ea84af84ca1d62,PodSandboxId:0959434dec2e35f50499215c76be050fdacc1c2c181f2e36438c3f8067ac5ad1,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765960737717717277,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-tvjx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ccdb9e-3552-4d0c-aa79-209aa4bc384e,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a38390ce93a173e94bede6cf6421552defd4cc8b49d9d37f2ecf97671f75d6c2,PodSandboxId:64eb7bcbb251e4830e3d11c315e467160b0687269f60e88bd80fd4d006e0b482,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765960737650267658,Labels:map[string]string{io.kubernetes.container.name: coredns,io
.kubernetes.pod.name: coredns-7d764666f9-vlpbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5767dc2c-d2a7-40df-9980-cf0eb5099135,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a79860283a5d688bfe587f86250a792d36d69e5ac6ad4f9fb2f0d883b408cb,PodSandboxId:5af1f6c2a652c192a45c2815827f125736e7052adfbb728b910354f9c511741e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Imag
e:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1765960737135627122,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mlzkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006580b2-c5aa-46f2-a109-0b4e4293a31d,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ae20198e715809c55c5aa1cc0b3381b754821cfe0b0c46ae53cae097d05216e,PodSandboxId:011e23e331c5febae5e463faffa959d50f8f082c6aa1d198ec0f0213b1f4d179,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765960737140440279,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc28e214-79dc-4410-9e19-5e01dc8c177e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e314a4cd7ebcb67b1f912124f948ae99758af84ebad9776b3b3602546d454b,PodSandboxId:c7036e926edab5c4d154a6569f41063e98c69df84cbac6722506810b7601fd63,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:5032a56602e1b
9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1765960734512370039,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a64983b2a6e32426f2b85bf4025ab6,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab0b7679f87d69cdc32a244cea1ba39e59fdf91d2f53f6311b9b967026a439d,PodSandboxId:4b5459e69ba11517197d53fc6fffce57e
140a1d288aee3172076464d8d348d83,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1765960734492361795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c66f6f2e75aeefdfac5925984824a19,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afae314e118b19ff9c06569c5a14f8f91
ab46fec1e8130e020e703f720115437,PodSandboxId:22bb51b96af29060aee8fa8f0991b7aa52bfa9569e427f40f109496493cb7526,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,CreatedAt:1765960734486910864,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f5a1f5e0c67075627ce1766978c877,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bbc4201b4702bae790066874d19fed5d287ad766c42b9fa697cc27070c56ab9,PodSandboxId:18a61a19556c83b9cc2d82630bc4a99969d429742da752a4b2f76392ba842128,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1765960734446267806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d663db3988802dd0b7f3a700e2703644,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb75140ee1acd6e5714883d02293293abb7231572ea9276571847bb58080c36e,PodSandboxId:69fa824c620eb19adc3f011c88a7a2667aaf4b0454d144f44b6057ca46b8c0e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_EXITED,CreatedAt:1765960693659832385,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mlzkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006580b2-c5aa-46f2-a109-0b4e4293a31d,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e85c94f90b224595b7f5063268abd750dfae8b503a1bd967141af459bce0472d,PodSandboxId:71812c39917583f284a2a4ed5532a76e8241ef9f313532cbbb5102c9f47a0aa6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765960693676740655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-vlpbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5767dc2c-d2a7-40df-9980-cf0eb5099135,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c41c46ed24892773ba743ef8a2e7b0127cf5a0e070f13fa09b5e982909263045,PodSandboxId:f2694d814f15a1423255f88ffab3a14679cad607b610b3d46cae4f97aba353e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765960693644322289,Labels:map[string]string{io.kubernetes.container.
name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-tvjx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ccdb9e-3552-4d0c-aa79-209aa4bc384e,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24a8c206284f995e1beab427472536e2d5ac218b538808a3a7eb1194ce0849f8,PodSandboxId:715e5eeee18dfc4b463d53d93a80cf8ab421378058d8c21e32071b228d5553f3,Metadata:&ContainerMetadata{Name:etcd,Attemp
t:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_EXITED,CreatedAt:1765960691012318916,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c66f6f2e75aeefdfac5925984824a19,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56fd93bc6d042ffc6d5e70b53079a47da3495248a5349c01d8f3a3ac359bf531,PodSandboxId:afad058a661b32de13f3638ac69bece9
0b9fa781f8c8dc6edb25b23ddb026fc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_EXITED,CreatedAt:1765960691013585704,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d663db3988802dd0b7f3a700e2703644,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9
aaa296c1ec268580ef35dfb6084b2d11765a0551fd3025f7cb535ed69d8c036,PodSandboxId:b8a40898664357842f4a01c490a316886b9502b89edcb571e646fed65e16d0b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_EXITED,CreatedAt:1765960690993886417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a64983b2a6e32426f2b85bf4025ab6,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efc92cf4ba908fee23149245dbc756291b252ab89d06ae1cd620e3763c633f12,PodSandboxId:3a70ccdc3e38a0bd793053d58cf9bef42ae122e408cbe7c66674d9eadbdff820,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765960574188270257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc28e214-79dc-4410-9e19-5e01dc8c177e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=546ab354-e629-4536-8610-16d73d097a2b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	bedb4e1b318c7       a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c                                      7 seconds ago        Running             myfrontend                0                   e3660be0b8fbb       sp-pod                                      default
	1bafea12ac56a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 seconds ago       Exited              mount-munger              0                   06869ac992b1b       busybox-mount                               default
	89f77421a9c08       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      About a minute ago   Running             coredns                   3                   0959434dec2e3       coredns-7d764666f9-tvjx6                    kube-system
	a38390ce93a17       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      About a minute ago   Running             coredns                   3                   64eb7bcbb251e       coredns-7d764666f9-vlpbt                    kube-system
	1ae20198e7158       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       2                   011e23e331c5f       storage-provisioner                         kube-system
	66a79860283a5       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                      About a minute ago   Running             kube-proxy                3                   5af1f6c2a652c       kube-proxy-mlzkt                            kube-system
	d8e314a4cd7eb       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                      About a minute ago   Running             kube-controller-manager   3                   c7036e926edab       kube-controller-manager-functional-452472   kube-system
	eab0b7679f87d       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      About a minute ago   Running             etcd                      3                   4b5459e69ba11       etcd-functional-452472                      kube-system
	afae314e118b1       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                      About a minute ago   Running             kube-apiserver            0                   22bb51b96af29       kube-apiserver-functional-452472            kube-system
	4bbc4201b4702       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                      About a minute ago   Running             kube-scheduler            3                   18a61a19556c8       kube-scheduler-functional-452472            kube-system
	e85c94f90b224       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      2 minutes ago        Exited              coredns                   2                   71812c3991758       coredns-7d764666f9-vlpbt                    kube-system
	bb75140ee1acd       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                      2 minutes ago        Exited              kube-proxy                2                   69fa824c620eb       kube-proxy-mlzkt                            kube-system
	c41c46ed24892       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      2 minutes ago        Exited              coredns                   2                   f2694d814f15a       coredns-7d764666f9-tvjx6                    kube-system
	56fd93bc6d042       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                      2 minutes ago        Exited              kube-scheduler            2                   afad058a661b3       kube-scheduler-functional-452472            kube-system
	24a8c206284f9       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      2 minutes ago        Exited              etcd                      2                   715e5eeee18df       etcd-functional-452472                      kube-system
	9aaa296c1ec26       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                      2 minutes ago        Exited              kube-controller-manager   2                   b8a4089866435       kube-controller-manager-functional-452472   kube-system
	efc92cf4ba908       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago        Exited              storage-provisioner       1                   3a70ccdc3e38a       storage-provisioner                         kube-system
	
	
	==> coredns [89f77421a9c08d419666dd2d4d752ec5754ea00f9553624bf9ea84af84ca1d62] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:60301 - 38524 "HINFO IN 8798269233804889227.6321400277170428478. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.14000854s
	
	
	==> coredns [a38390ce93a173e94bede6cf6421552defd4cc8b49d9d37f2ecf97671f75d6c2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:56828 - 64275 "HINFO IN 4349222604740283638.6489643425740530321. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.070840364s
	
	
	==> coredns [c41c46ed24892773ba743ef8a2e7b0127cf5a0e070f13fa09b5e982909263045] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:39443 - 42903 "HINFO IN 8449925704322377890.3510736395015828867. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.060633654s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e85c94f90b224595b7f5063268abd750dfae8b503a1bd967141af459bce0472d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:36475 - 11326 "HINFO IN 106714535523831428.6237128226454131128. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.079172803s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-452472
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-452472
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144
	                    minikube.k8s.io/name=functional-452472
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T08_36_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 08:36:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-452472
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 08:40:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 08:40:28 +0000   Wed, 17 Dec 2025 08:36:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 08:40:28 +0000   Wed, 17 Dec 2025 08:36:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 08:40:28 +0000   Wed, 17 Dec 2025 08:36:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 08:40:28 +0000   Wed, 17 Dec 2025 08:36:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.226
	  Hostname:    functional-452472
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	System Info:
	  Machine ID:                 aca9beaca1a34ffc8821f5e0518ea0b2
	  System UUID:                aca9beac-a1a3-4ffc-8821-f5e0518ea0b2
	  Boot ID:                    e50d8a89-43d4-41a0-a2a5-d33d9ae2bdd7
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-92scj                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  default                     hello-node-connect-9f67c86d4-w5n8n            0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  default                     mysql-7d7b65bc95-tv9dk                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    7s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-7d764666f9-tvjx6                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m29s
	  kube-system                 coredns-7d764666f9-vlpbt                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m29s
	  kube-system                 etcd-functional-452472                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m34s
	  kube-system                 kube-apiserver-functional-452472              250m (12%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-controller-manager-functional-452472     200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 kube-proxy-mlzkt                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-scheduler-functional-452472              100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-rm7wm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-4d69z          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (72%)  700m (35%)
	  memory             752Mi (19%)  1040Mi (26%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  4m30s  node-controller  Node functional-452472 event: Registered Node functional-452472 in Controller
	  Normal  RegisteredNode  2m25s  node-controller  Node functional-452472 event: Registered Node functional-452472 in Controller
	  Normal  RegisteredNode  102s   node-controller  Node functional-452472 event: Registered Node functional-452472 in Controller
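
For reference, the percentages in the "Allocated resources" table are computed against the node's Allocatable figures above (2 CPUs = 2000m, 4001796Ki memory). A quick check using only the numbers shown, which also explains why CPU requests (72%) can exceed CPU limits (35%) here:

	cpu requests:     1450m / 2000m                  = 72.5%  -> shown as 72%
	cpu limits:        700m / 2000m                  = 35.0%  -> shown as 35%
	memory requests:   752Mi (770048Ki)  / 4001796Ki ~= 19.2% -> shown as 19%
	memory limits:    1040Mi (1064960Ki) / 4001796Ki ~= 26.6% -> shown as 26%

Most of the system pods set requests without limits; only mysql (700m CPU / 700Mi memory) and the two coredns pods (170Mi memory each) contribute limits, so the limit totals come out smaller than the request totals.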
	
	
	==> dmesg <==
	[  +0.000038] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.007219] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.154209] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.085003] kauditd_printk_skb: 1 callbacks suppressed
	[Dec17 08:36] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.141015] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.779333] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.183668] kauditd_printk_skb: 269 callbacks suppressed
	[Dec17 08:38] kauditd_printk_skb: 383 callbacks suppressed
	[  +2.029181] kauditd_printk_skb: 312 callbacks suppressed
	[  +5.150512] kauditd_printk_skb: 48 callbacks suppressed
	[  +9.057264] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.731630] kauditd_printk_skb: 12 callbacks suppressed
	[  +3.706233] kauditd_printk_skb: 246 callbacks suppressed
	[  +3.555815] kauditd_printk_skb: 183 callbacks suppressed
	[Dec17 08:39] kauditd_printk_skb: 203 callbacks suppressed
	[  +7.493186] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.000018] kauditd_printk_skb: 133 callbacks suppressed
	[  +0.000568] kauditd_printk_skb: 68 callbacks suppressed
	[ +24.958087] kauditd_printk_skb: 26 callbacks suppressed
	[Dec17 08:40] kauditd_printk_skb: 31 callbacks suppressed
	[  +6.308952] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.633276] kauditd_printk_skb: 109 callbacks suppressed
	
	
	==> etcd [24a8c206284f995e1beab427472536e2d5ac218b538808a3a7eb1194ce0849f8] <==
	{"level":"info","ts":"2025-12-17T08:38:11.672228Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T08:38:11.672326Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T08:38:11.672353Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T08:38:11.672992Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T08:38:11.675260Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T08:38:11.698902Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.226:2379"}
	{"level":"info","ts":"2025-12-17T08:38:11.703081Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-17T08:38:39.186890Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-17T08:38:39.187004Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-452472","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.226:2380"],"advertise-client-urls":["https://192.168.39.226:2379"]}
	{"level":"error","ts":"2025-12-17T08:38:39.187115Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T08:38:39.270248Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T08:38:39.271669Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-17T08:38:39.271720Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-17T08:38:39.271803Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9e3e2863ac888927","current-leader-member-id":"9e3e2863ac888927"}
	{"level":"warn","ts":"2025-12-17T08:38:39.271808Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T08:38:39.271849Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T08:38:39.271861Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-17T08:38:39.271876Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-17T08:38:39.271896Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.226:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T08:38:39.271909Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.226:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T08:38:39.271915Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.226:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T08:38:39.274852Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.226:2380"}
	{"level":"error","ts":"2025-12-17T08:38:39.274944Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.226:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T08:38:39.274964Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.226:2380"}
	{"level":"info","ts":"2025-12-17T08:38:39.274970Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-452472","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.226:2380"],"advertise-client-urls":["https://192.168.39.226:2379"]}
	
	
	==> etcd [eab0b7679f87d69cdc32a244cea1ba39e59fdf91d2f53f6311b9b967026a439d] <==
	{"level":"info","ts":"2025-12-17T08:38:54.928513Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"9e3e2863ac888927","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-12-17T08:38:54.928674Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-17T08:38:54.929146Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-17T08:38:54.929212Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-17T08:38:54.928791Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"9e3e2863ac888927 switched to configuration voters=(11402595715110177063)"}
	{"level":"info","ts":"2025-12-17T08:38:54.929299Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"5e6abf1d35eec4c5","local-member-id":"9e3e2863ac888927","added-peer-id":"9e3e2863ac888927","added-peer-peer-urls":["https://192.168.39.226:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-17T08:38:54.929404Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"5e6abf1d35eec4c5","local-member-id":"9e3e2863ac888927","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-17T08:38:55.091031Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9e3e2863ac888927 is starting a new election at term 4"}
	{"level":"info","ts":"2025-12-17T08:38:55.091086Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9e3e2863ac888927 became pre-candidate at term 4"}
	{"level":"info","ts":"2025-12-17T08:38:55.091152Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9e3e2863ac888927 received MsgPreVoteResp from 9e3e2863ac888927 at term 4"}
	{"level":"info","ts":"2025-12-17T08:38:55.091203Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9e3e2863ac888927 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T08:38:55.091218Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9e3e2863ac888927 became candidate at term 5"}
	{"level":"info","ts":"2025-12-17T08:38:55.095354Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9e3e2863ac888927 received MsgVoteResp from 9e3e2863ac888927 at term 5"}
	{"level":"info","ts":"2025-12-17T08:38:55.096209Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9e3e2863ac888927 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T08:38:55.096259Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9e3e2863ac888927 became leader at term 5"}
	{"level":"info","ts":"2025-12-17T08:38:55.096279Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9e3e2863ac888927 elected leader 9e3e2863ac888927 at term 5"}
	{"level":"info","ts":"2025-12-17T08:38:55.098445Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T08:38:55.098449Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9e3e2863ac888927","local-member-attributes":"{Name:functional-452472 ClientURLs:[https://192.168.39.226:2379]}","cluster-id":"5e6abf1d35eec4c5","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-17T08:38:55.099835Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T08:38:55.104311Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T08:38:55.104352Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T08:38:55.104217Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T08:38:55.114345Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T08:38:55.129679Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-17T08:38:55.130618Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.226:2379"}
	
	
	==> kernel <==
	 08:40:41 up 5 min,  0 users,  load average: 1.46, 0.75, 0.32
	Linux functional-452472 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Dec 16 03:41:16 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [afae314e118b19ff9c06569c5a14f8f91ab46fec1e8130e020e703f720115437] <==
	I1217 08:38:56.531595       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 08:38:56.532041       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1217 08:38:56.532093       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1217 08:38:56.536430       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 08:38:56.536638       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 08:38:56.536814       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1217 08:38:56.558381       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 08:38:56.640070       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 08:38:57.344110       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 08:38:58.358416       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 08:38:58.395793       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 08:38:58.427807       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 08:38:58.437323       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 08:39:00.019393       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 08:39:00.070586       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 08:39:00.119536       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 08:39:12.099203       1 alloc.go:329] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.47.213"}
	I1217 08:39:16.395273       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.116.62"}
	I1217 08:39:17.059860       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.104.43.167"}
	E1217 08:40:31.206675       1 conn.go:339] Error on socket receive: read tcp 192.168.39.226:8441->192.168.39.1:52190: use of closed network connection
	I1217 08:40:34.046283       1 alloc.go:329] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.104.6.220"}
	E1217 08:40:38.936021       1 conn.go:339] Error on socket receive: read tcp 192.168.39.226:8441->192.168.39.1:55152: use of closed network connection
	I1217 08:40:39.805642       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 08:40:40.059571       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.57.122"}
	I1217 08:40:40.078857       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.159.11"}
	
	
	==> kube-controller-manager [9aaa296c1ec268580ef35dfb6084b2d11765a0551fd3025f7cb535ed69d8c036] <==
	I1217 08:38:16.306809       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.306853       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.306898       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.306969       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.307062       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.307100       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.307241       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.307292       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.307313       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.307327       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.307355       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.307378       1 range_allocator.go:177] "Sending events to api server"
	I1217 08:38:16.307410       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1217 08:38:16.307414       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:38:16.307417       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.307493       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.307546       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.307588       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.307641       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.311902       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:38:16.332293       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.402703       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.402719       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 08:38:16.402723       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 08:38:16.412817       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-controller-manager [d8e314a4cd7ebcb67b1f912124f948ae99758af84ebad9776b3b3602546d454b] <==
	I1217 08:38:59.632912       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:59.633011       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:59.633107       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:59.633213       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:59.633260       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:59.633365       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:59.633455       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:59.629646       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:59.629654       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:59.634044       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:59.629747       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:59.635307       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:59.647256       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:38:59.675410       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:59.731265       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:59.731333       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 08:38:59.731354       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 08:38:59.747401       1 shared_informer.go:377] "Caches are synced"
	E1217 08:40:39.904682       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 08:40:39.907328       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 08:40:39.912630       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 08:40:39.921492       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 08:40:39.921881       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 08:40:39.934450       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 08:40:39.938224       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [66a79860283a5d688bfe587f86250a792d36d69e5ac6ad4f9fb2f0d883b408cb] <==
	I1217 08:38:57.928683       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:38:58.031590       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:58.031630       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.226"]
	E1217 08:38:58.031700       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 08:38:58.165581       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1217 08:38:58.166145       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1217 08:38:58.166295       1 server_linux.go:136] "Using iptables Proxier"
	I1217 08:38:58.184255       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 08:38:58.189234       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1217 08:38:58.189248       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:38:58.200254       1 config.go:200] "Starting service config controller"
	I1217 08:38:58.200265       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 08:38:58.200334       1 config.go:106] "Starting endpoint slice config controller"
	I1217 08:38:58.200339       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 08:38:58.200368       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 08:38:58.200373       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 08:38:58.204656       1 config.go:309] "Starting node config controller"
	I1217 08:38:58.204787       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 08:38:58.204796       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 08:38:58.301605       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 08:38:58.301644       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 08:38:58.301678       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [bb75140ee1acd6e5714883d02293293abb7231572ea9276571847bb58080c36e] <==
	I1217 08:38:13.970604       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:38:14.070869       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:14.071069       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.226"]
	E1217 08:38:14.071402       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 08:38:14.114493       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1217 08:38:14.114676       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1217 08:38:14.114754       1 server_linux.go:136] "Using iptables Proxier"
	I1217 08:38:14.126138       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 08:38:14.126444       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1217 08:38:14.126470       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:38:14.132669       1 config.go:200] "Starting service config controller"
	I1217 08:38:14.134078       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 08:38:14.134499       1 config.go:106] "Starting endpoint slice config controller"
	I1217 08:38:14.134510       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 08:38:14.134521       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 08:38:14.134524       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 08:38:14.135016       1 config.go:309] "Starting node config controller"
	I1217 08:38:14.135023       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 08:38:14.135080       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 08:38:14.235110       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 08:38:14.235274       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 08:38:14.234998       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
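
Both kube-proxy attempts record the same advisory: with nodePortAddresses unset, NodePort connections are accepted on every local IP, and the message itself suggests `--nodeport-addresses primary`. The equivalent setting in a kube-proxy configuration file would look roughly like the sketch below (illustrative only; the cluster's actual KubeProxyConfiguration is not captured in these logs, and a CIDR list such as ["192.168.39.0/24"] is the long-standing alternative to the newer "primary" shorthand):

	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	# Limit NodePort listeners to the node's primary IP instead of all local IPs.
	nodePortAddresses:
	  - primary

The adjacent "No iptables support for family IPv6" lines only record that the guest kernel has no ip6tables nat table, so kube-proxy proceeds in single-stack IPv4 mode, as the following log line states.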
	
	
	==> kube-scheduler [4bbc4201b4702bae790066874d19fed5d287ad766c42b9fa697cc27070c56ab9] <==
	I1217 08:38:55.027638       1 serving.go:386] Generated self-signed cert in-memory
	W1217 08:38:56.408069       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 08:38:56.408238       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 08:38:56.408328       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 08:38:56.408351       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 08:38:56.454683       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1217 08:38:56.454824       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:38:56.459353       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 08:38:56.460003       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:38:56.460036       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:38:56.460052       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 08:38:56.560261       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [56fd93bc6d042ffc6d5e70b53079a47da3495248a5349c01d8f3a3ac359bf531] <==
	I1217 08:38:11.964464       1 serving.go:386] Generated self-signed cert in-memory
	W1217 08:38:13.111726       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 08:38:13.111820       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 08:38:13.111829       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 08:38:13.111891       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 08:38:13.166743       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1217 08:38:13.166776       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:38:13.180382       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 08:38:13.180543       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:38:13.180571       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:38:13.180591       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 08:38:13.281230       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:39.198775       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1217 08:38:39.198828       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1217 08:38:39.198853       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1217 08:38:39.198895       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:38:39.199270       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1217 08:38:39.199327       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 17 08:40:23 functional-452472 kubelet[7366]: E1217 08:40:23.730429    7366 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765960823727667630  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:166764}  inodes_used:{value:82}}"
	Dec 17 08:40:23 functional-452472 kubelet[7366]: E1217 08:40:23.730745    7366 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765960823727667630  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:166764}  inodes_used:{value:82}}"
	Dec 17 08:40:25 functional-452472 kubelet[7366]: E1217 08:40:25.574789    7366 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-452472" containerName="kube-controller-manager"
	Dec 17 08:40:30 functional-452472 kubelet[7366]: E1217 08:40:30.573582    7366 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-452472" containerName="etcd"
	Dec 17 08:40:30 functional-452472 kubelet[7366]: I1217 08:40:30.589031    7366 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=6.117274131 podStartE2EDuration="1m7.589013038s" podCreationTimestamp="2025-12-17 08:39:23 +0000 UTC" firstStartedPulling="2025-12-17 08:39:23.590140721 +0000 UTC m=+30.140190169" lastFinishedPulling="2025-12-17 08:40:25.061879642 +0000 UTC m=+91.611929076" observedRunningTime="2025-12-17 08:40:25.608577856 +0000 UTC m=+92.158627326" watchObservedRunningTime="2025-12-17 08:40:30.589013038 +0000 UTC m=+97.139062492"
	Dec 17 08:40:31 functional-452472 kubelet[7366]: I1217 08:40:31.811364    7366 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6c306e43-6951-420c-9bc5-841a32473efd-pvc-8baafaf8-f61e-4b54-9401-0f8dcb8191d5\" (UniqueName: \"kubernetes.io/host-path/6c306e43-6951-420c-9bc5-841a32473efd-pvc-8baafaf8-f61e-4b54-9401-0f8dcb8191d5\") pod \"6c306e43-6951-420c-9bc5-841a32473efd\" (UID: \"6c306e43-6951-420c-9bc5-841a32473efd\") "
	Dec 17 08:40:31 functional-452472 kubelet[7366]: I1217 08:40:31.811413    7366 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/6c306e43-6951-420c-9bc5-841a32473efd-kube-api-access-89fjh\" (UniqueName: \"kubernetes.io/projected/6c306e43-6951-420c-9bc5-841a32473efd-kube-api-access-89fjh\") pod \"6c306e43-6951-420c-9bc5-841a32473efd\" (UID: \"6c306e43-6951-420c-9bc5-841a32473efd\") "
	Dec 17 08:40:31 functional-452472 kubelet[7366]: I1217 08:40:31.811735    7366 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c306e43-6951-420c-9bc5-841a32473efd-pvc-8baafaf8-f61e-4b54-9401-0f8dcb8191d5" pod "6c306e43-6951-420c-9bc5-841a32473efd" (UID: "6c306e43-6951-420c-9bc5-841a32473efd"). InnerVolumeSpecName "pvc-8baafaf8-f61e-4b54-9401-0f8dcb8191d5". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 17 08:40:31 functional-452472 kubelet[7366]: I1217 08:40:31.814076    7366 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c306e43-6951-420c-9bc5-841a32473efd-kube-api-access-89fjh" pod "6c306e43-6951-420c-9bc5-841a32473efd" (UID: "6c306e43-6951-420c-9bc5-841a32473efd"). InnerVolumeSpecName "kube-api-access-89fjh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 17 08:40:31 functional-452472 kubelet[7366]: I1217 08:40:31.911626    7366 reconciler_common.go:299] "Volume detached for volume \"pvc-8baafaf8-f61e-4b54-9401-0f8dcb8191d5\" (UniqueName: \"kubernetes.io/host-path/6c306e43-6951-420c-9bc5-841a32473efd-pvc-8baafaf8-f61e-4b54-9401-0f8dcb8191d5\") on node \"functional-452472\" DevicePath \"\""
	Dec 17 08:40:31 functional-452472 kubelet[7366]: I1217 08:40:31.911657    7366 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-89fjh\" (UniqueName: \"kubernetes.io/projected/6c306e43-6951-420c-9bc5-841a32473efd-kube-api-access-89fjh\") on node \"functional-452472\" DevicePath \"\""
	Dec 17 08:40:32 functional-452472 kubelet[7366]: I1217 08:40:32.652640    7366 scope.go:122] "RemoveContainer" containerID="1b4ab62f082b7adaa6758f9d7742eb96b11bdc95680e9b6d3af2e3ff6596d295"
	Dec 17 08:40:32 functional-452472 kubelet[7366]: I1217 08:40:32.919073    7366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grcdr\" (UniqueName: \"kubernetes.io/projected/0cb0b118-a214-4183-94c8-217df6984d7e-kube-api-access-grcdr\") pod \"sp-pod\" (UID: \"0cb0b118-a214-4183-94c8-217df6984d7e\") " pod="default/sp-pod"
	Dec 17 08:40:32 functional-452472 kubelet[7366]: I1217 08:40:32.919114    7366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8baafaf8-f61e-4b54-9401-0f8dcb8191d5\" (UniqueName: \"kubernetes.io/host-path/0cb0b118-a214-4183-94c8-217df6984d7e-pvc-8baafaf8-f61e-4b54-9401-0f8dcb8191d5\") pod \"sp-pod\" (UID: \"0cb0b118-a214-4183-94c8-217df6984d7e\") " pod="default/sp-pod"
	Dec 17 08:40:33 functional-452472 kubelet[7366]: I1217 08:40:33.579140    7366 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6c306e43-6951-420c-9bc5-841a32473efd" path="/var/lib/kubelet/pods/6c306e43-6951-420c-9bc5-841a32473efd/volumes"
	Dec 17 08:40:33 functional-452472 kubelet[7366]: E1217 08:40:33.733911    7366 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765960833732108499  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:195886}  inodes_used:{value:92}}"
	Dec 17 08:40:33 functional-452472 kubelet[7366]: E1217 08:40:33.733965    7366 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765960833732108499  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:195886}  inodes_used:{value:92}}"
	Dec 17 08:40:34 functional-452472 kubelet[7366]: I1217 08:40:34.107736    7366 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=2.107721976 podStartE2EDuration="2.107721976s" podCreationTimestamp="2025-12-17 08:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 08:40:33.681720293 +0000 UTC m=+100.231769751" watchObservedRunningTime="2025-12-17 08:40:34.107721976 +0000 UTC m=+100.657771429"
	Dec 17 08:40:34 functional-452472 kubelet[7366]: I1217 08:40:34.228276    7366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-968b6\" (UniqueName: \"kubernetes.io/projected/8efc52d2-890d-4c37-babf-ec218c8544df-kube-api-access-968b6\") pod \"mysql-7d7b65bc95-tv9dk\" (UID: \"8efc52d2-890d-4c37-babf-ec218c8544df\") " pod="default/mysql-7d7b65bc95-tv9dk"
	Dec 17 08:40:34 functional-452472 kubelet[7366]: E1217 08:40:34.573628    7366 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-452472" containerName="kube-scheduler"
	Dec 17 08:40:35 functional-452472 kubelet[7366]: E1217 08:40:35.574316    7366 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-452472" containerName="kube-apiserver"
	Dec 17 08:40:40 functional-452472 kubelet[7366]: I1217 08:40:40.068357    7366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfv2f\" (UniqueName: \"kubernetes.io/projected/2ed28554-e147-40f1-9c98-22fee95237ba-kube-api-access-pfv2f\") pod \"dashboard-metrics-scraper-5565989548-rm7wm\" (UID: \"2ed28554-e147-40f1-9c98-22fee95237ba\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-rm7wm"
	Dec 17 08:40:40 functional-452472 kubelet[7366]: I1217 08:40:40.068433    7366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2ed28554-e147-40f1-9c98-22fee95237ba-tmp-volume\") pod \"dashboard-metrics-scraper-5565989548-rm7wm\" (UID: \"2ed28554-e147-40f1-9c98-22fee95237ba\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-rm7wm"
	Dec 17 08:40:40 functional-452472 kubelet[7366]: I1217 08:40:40.068511    7366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c2460300-6622-441f-a2dc-e78dc1e9947f-tmp-volume\") pod \"kubernetes-dashboard-b84665fb8-4d69z\" (UID: \"c2460300-6622-441f-a2dc-e78dc1e9947f\") " pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-4d69z"
	Dec 17 08:40:40 functional-452472 kubelet[7366]: I1217 08:40:40.068611    7366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p85h6\" (UniqueName: \"kubernetes.io/projected/c2460300-6622-441f-a2dc-e78dc1e9947f-kube-api-access-p85h6\") pod \"kubernetes-dashboard-b84665fb8-4d69z\" (UID: \"c2460300-6622-441f-a2dc-e78dc1e9947f\") " pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-4d69z"
	
	
	==> storage-provisioner [1ae20198e715809c55c5aa1cc0b3381b754821cfe0b0c46ae53cae097d05216e] <==
	W1217 08:40:17.198400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:40:19.202114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:40:19.207004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:40:21.210384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:40:21.215347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:40:23.224814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:40:23.238471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:40:25.241962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:40:25.250245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:40:27.253822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:40:27.259230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:40:29.263370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:40:29.268384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:40:31.291061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:40:31.305979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:40:33.310596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:40:33.324674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:40:35.327256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:40:35.332252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:40:37.334922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:40:37.341340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:40:39.344860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:40:39.351607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:40:41.354481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:40:41.362397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [efc92cf4ba908fee23149245dbc756291b252ab89d06ae1cd620e3763c633f12] <==
	I1217 08:36:14.322109       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 08:36:14.333377       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 08:36:14.333426       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 08:36:14.335878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:36:14.342037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 08:36:14.342382       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 08:36:14.342958       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"376db418-cc6c-431a-aa36-733dd71501f9", APIVersion:"v1", ResourceVersion:"385", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-452472_c8a0e0d7-2363-48c0-9a83-6b882add5351 became leader
	I1217 08:36:14.343002       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-452472_c8a0e0d7-2363-48c0-9a83-6b882add5351!
	W1217 08:36:14.344850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:36:14.352919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 08:36:14.443397       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-452472_c8a0e0d7-2363-48c0-9a83-6b882add5351!
	W1217 08:36:16.355934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:36:16.362782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:36:18.365827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:36:18.371143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:36:20.374941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:36:20.382691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:36:22.386270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:36:22.395556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:36:24.399296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:36:24.404106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:36:26.407368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:36:26.412414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-452472 -n functional-452472
helpers_test.go:270: (dbg) Run:  kubectl --context functional-452472 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-5758569b79-92scj hello-node-connect-9f67c86d4-w5n8n mysql-7d7b65bc95-tv9dk dashboard-metrics-scraper-5565989548-rm7wm kubernetes-dashboard-b84665fb8-4d69z
helpers_test.go:283: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-452472 describe pod busybox-mount hello-node-5758569b79-92scj hello-node-connect-9f67c86d4-w5n8n mysql-7d7b65bc95-tv9dk dashboard-metrics-scraper-5565989548-rm7wm kubernetes-dashboard-b84665fb8-4d69z
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-452472 describe pod busybox-mount hello-node-5758569b79-92scj hello-node-connect-9f67c86d4-w5n8n mysql-7d7b65bc95-tv9dk dashboard-metrics-scraper-5565989548-rm7wm kubernetes-dashboard-b84665fb8-4d69z: exit status 1 (90.304079ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-452472/192.168.39.226
	Start Time:       Wed, 17 Dec 2025 08:39:20 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.15
	IPs:
	  IP:  10.244.0.15
	Containers:
	  mount-munger:
	    Container ID:  cri-o://1bafea12ac56a396c9e3c5b4985c434f4aa90ca96f2043cf0b42335c4ccfaee4
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 17 Dec 2025 08:40:19 +0000
	      Finished:     Wed, 17 Dec 2025 08:40:19 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vq2f7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-vq2f7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  82s   default-scheduler  Successfully assigned default/busybox-mount to functional-452472
	  Normal  Pulling    82s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     23s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.038s (58.962s including waiting). Image size: 4631262 bytes.
	  Normal  Created    23s   kubelet            Container created
	  Normal  Started    23s   kubelet            Container started
	
	
	Name:             hello-node-5758569b79-92scj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-452472/192.168.39.226
	Start Time:       Wed, 17 Dec 2025 08:39:16 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:           10.244.0.13
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mmxsc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-mmxsc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  86s                default-scheduler  Successfully assigned default/hello-node-5758569b79-92scj to functional-452472
	  Warning  Failed     54s                kubelet            Failed to pull image "kicbase/echo-server": initializing source docker://kicbase/echo-server:latest: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     54s                kubelet            Error: ErrImagePull
	  Normal   BackOff    54s                kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     54s                kubelet            Error: ImagePullBackOff
	  Normal   Pulling    39s (x2 over 86s)  kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-9f67c86d4-w5n8n
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-452472/192.168.39.226
	Start Time:       Wed, 17 Dec 2025 08:39:17 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.14
	IPs:
	  IP:           10.244.0.14
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-26h8s (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-26h8s:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  85s                default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-w5n8n to functional-452472
	  Warning  Failed     24s                kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     24s                kubelet            Error: ErrImagePull
	  Normal   BackOff    23s                kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     23s                kubelet            Error: ImagePullBackOff
	  Normal   Pulling    12s (x2 over 85s)  kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-7d7b65bc95-tv9dk
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-452472/192.168.39.226
	Start Time:       Wed, 17 Dec 2025 08:40:34 +0000
	Labels:           app=mysql
	                  pod-template-hash=7d7b65bc95
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-7d7b65bc95
	Containers:
	  mysql:
	    Container ID:   
	    Image:          public.ecr.aws/docker/library/mysql:8.4
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-968b6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-968b6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  8s    default-scheduler  Successfully assigned default/mysql-7d7b65bc95-tv9dk to functional-452472
	  Normal  Pulling    8s    kubelet            Pulling image "public.ecr.aws/docker/library/mysql:8.4"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-rm7wm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-4d69z" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context functional-452472 describe pod busybox-mount hello-node-5758569b79-92scj hello-node-connect-9f67c86d4-w5n8n mysql-7d7b65bc95-tv9dk dashboard-metrics-scraper-5565989548-rm7wm kubernetes-dashboard-b84665fb8-4d69z: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (3.41s)
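Note on the recurring failure mode: the kubelet events above show every pull of kicbase/echo-server from docker.io being rejected with "toomanyrequests" (the anonymous Docker Hub pull rate limit), which keeps the hello-node pods in ImagePullBackOff and starves the parallel tests of their fixtures. One possible local mitigation, sketched here and not part of the recorded run, is to side-load the image into the profile so the kubelet can use a cached copy instead of pulling from docker.io (this only helps if the pod spec's imagePullPolicy allows a cached image; the image tag below is an assumption for illustration):

	# pull once on the host (authenticated, or while still under the rate limit), then side-load it into the node
	docker pull kicbase/echo-server:latest
	out/minikube-linux-amd64 -p functional-452472 image load kicbase/echo-server:latest
	# confirm CRI-O inside the node can now see the image
	out/minikube-linux-amd64 -p functional-452472 image ls | grep echo-server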

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (602.81s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-452472 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-452472 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-w5n8n" [c9a81366-95a8-4150-a760-d6f402e6466f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-452472 -n functional-452472
functional_test.go:1645: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-17 08:49:17.29229914 +0000 UTC m=+2032.027496480
functional_test.go:1645: (dbg) Run:  kubectl --context functional-452472 describe po hello-node-connect-9f67c86d4-w5n8n -n default
functional_test.go:1645: (dbg) kubectl --context functional-452472 describe po hello-node-connect-9f67c86d4-w5n8n -n default:
Name:             hello-node-connect-9f67c86d4-w5n8n
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-452472/192.168.39.226
Start Time:       Wed, 17 Dec 2025 08:39:17 +0000
Labels:           app=hello-node-connect
pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.14
IPs:
IP:           10.244.0.14
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-26h8s (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-26h8s:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-w5n8n to functional-452472
Warning  Failed     8m59s                  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     2m56s (x4 over 8m59s)  kubelet            Error: ErrImagePull
Warning  Failed     2m56s (x3 over 7m37s)  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    100s (x11 over 8m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     100s (x11 over 8m58s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    87s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-452472 logs hello-node-connect-9f67c86d4-w5n8n -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-452472 logs hello-node-connect-9f67c86d4-w5n8n -n default: exit status 1 (66.384621ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-w5n8n" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-452472 logs hello-node-connect-9f67c86d4-w5n8n -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-452472 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-9f67c86d4-w5n8n
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-452472/192.168.39.226
Start Time:       Wed, 17 Dec 2025 08:39:17 +0000
Labels:           app=hello-node-connect
pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.14
IPs:
IP:           10.244.0.14
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-26h8s (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-26h8s:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-w5n8n to functional-452472
Warning  Failed     8m59s                  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     2m56s (x4 over 8m59s)  kubelet            Error: ErrImagePull
Warning  Failed     2m56s (x3 over 7m37s)  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    100s (x11 over 8m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     100s (x11 over 8m58s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    87s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-452472 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-452472 logs -l app=hello-node-connect: exit status 1 (65.376387ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-w5n8n" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-452472 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-452472 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.104.43.167
IPs:                      10.104.43.167
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31391/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
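The empty Endpoints field above is the direct consequence of the ImagePullBackOff: the only pod selected by app=hello-node-connect never becomes Ready, so the NodePort service has nothing to route to. A quick cross-check from the same kubeconfig context, sketched here with standard kubectl queries rather than taken from the recorded run, would be:

	kubectl --context functional-452472 get endpointslices -l kubernetes.io/service-name=hello-node-connect
	kubectl --context functional-452472 get pods -l app=hello-node-connect -o wide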
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-452472 -n functional-452472
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-452472 logs -n 25: (1.409378482s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-452472 image ls                                                                                                                                   │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ image          │ functional-452472 image load --daemon kicbase/echo-server:functional-452472 --alsologtostderr                                                                │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ image          │ functional-452472 image ls                                                                                                                                   │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ image          │ functional-452472 image save kicbase/echo-server:functional-452472 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ image          │ functional-452472 image rm kicbase/echo-server:functional-452472 --alsologtostderr                                                                           │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ image          │ functional-452472 image ls                                                                                                                                   │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ image          │ functional-452472 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ image          │ functional-452472 image ls                                                                                                                                   │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ image          │ functional-452472 image save --daemon kicbase/echo-server:functional-452472 --alsologtostderr                                                                │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ start          │ -p functional-452472 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                    │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │                     │
	│ start          │ -p functional-452472 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                              │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │                     │
	│ ssh            │ functional-452472 ssh sudo cat /etc/test/nested/copy/897277/hosts                                                                                            │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ start          │ -p functional-452472 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                    │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-452472 --alsologtostderr -v=1                                                                                               │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │                     │
	│ image          │ functional-452472 image ls --format short --alsologtostderr                                                                                                  │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ image          │ functional-452472 image ls --format yaml --alsologtostderr                                                                                                   │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ ssh            │ functional-452472 ssh pgrep buildkitd                                                                                                                        │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │                     │
	│ image          │ functional-452472 image build -t localhost/my-image:functional-452472 testdata/build --alsologtostderr                                                       │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ image          │ functional-452472 image ls                                                                                                                                   │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ image          │ functional-452472 image ls --format json --alsologtostderr                                                                                                   │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ image          │ functional-452472 image ls --format table --alsologtostderr                                                                                                  │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ update-context │ functional-452472 update-context --alsologtostderr -v=2                                                                                                      │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ update-context │ functional-452472 update-context --alsologtostderr -v=2                                                                                                      │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ update-context │ functional-452472 update-context --alsologtostderr -v=2                                                                                                      │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:40 UTC │ 17 Dec 25 08:40 UTC │
	│ service        │ functional-452472 service list                                                                                                                               │ functional-452472 │ jenkins │ v1.37.0 │ 17 Dec 25 08:49 UTC │                     │
	└────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 08:40:33
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 08:40:33.874428  909302 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:40:33.874739  909302 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:40:33.874750  909302 out.go:374] Setting ErrFile to fd 2...
	I1217 08:40:33.874755  909302 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:40:33.875044  909302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
	I1217 08:40:33.875537  909302 out.go:368] Setting JSON to false
	I1217 08:40:33.876605  909302 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12180,"bootTime":1765948654,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:40:33.876662  909302 start.go:143] virtualization: kvm guest
	I1217 08:40:33.878686  909302 out.go:179] * [functional-452472] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 08:40:33.880166  909302 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 08:40:33.880164  909302 notify.go:221] Checking for updates...
	I1217 08:40:33.881527  909302 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:40:33.882780  909302 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-893359/kubeconfig
	I1217 08:40:33.884018  909302 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-893359/.minikube
	I1217 08:40:33.885026  909302 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 08:40:33.886072  909302 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 08:40:33.887822  909302 config.go:182] Loaded profile config "functional-452472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:40:33.888499  909302 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:40:33.919391  909302 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 08:40:33.920611  909302 start.go:309] selected driver: kvm2
	I1217 08:40:33.920623  909302 start.go:927] validating driver "kvm2" against &{Name:functional-452472 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-rc.1 ClusterName:functional-452472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.226 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:262
80h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:40:33.920711  909302 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 08:40:33.922412  909302 out.go:203] 
	W1217 08:40:33.923375  909302 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 08:40:33.924380  909302 out.go:203] 
	
	
	==> CRI-O <==
	Dec 17 08:49:18 functional-452472 crio[6904]: time="2025-12-17 08:49:18.323847584Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ab3cc0e9-9b5c-4a1c-9eee-89b9e4bd924e name=/runtime.v1.RuntimeService/Version
	Dec 17 08:49:18 functional-452472 crio[6904]: time="2025-12-17 08:49:18.325314941Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cd5cd46a-cc33-43ec-84ef-5a5a6df5e44f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:49:18 functional-452472 crio[6904]: time="2025-12-17 08:49:18.326083802Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765961358326058087,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:242163,},InodesUsed:&UInt64Value{Value:113,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd5cd46a-cc33-43ec-84ef-5a5a6df5e44f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:49:18 functional-452472 crio[6904]: time="2025-12-17 08:49:18.327020053Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=91097283-e3b1-4d60-ba5c-79542f078188 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:49:18 functional-452472 crio[6904]: time="2025-12-17 08:49:18.327118541Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=91097283-e3b1-4d60-ba5c-79542f078188 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:49:18 functional-452472 crio[6904]: time="2025-12-17 08:49:18.327491519Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff1772eb8fe667aeab886ba7d737b0a1e5669284c09a0bbba4f4c2bceb7edb91,PodSandboxId:c526b60999cf84ed5b6e371f5d98b4f5ce5fa1450cd5effe418a930d49270c30,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765960915113959626,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-tv9dk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8efc52d2-890d-4c37-babf-ec218c8544df,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bedb4e1b318c719cfedaec8f80eb9bb307b355e8616bbcde9ca2b4fc3ced8ec0,PodSandboxId:e3660be0b8fbb99d1b99ebb1e270e6a59d8d4e81eb33571b0c50b9f3d52ab1e1,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765960833416433240,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cb0b118-a214-4183-94c8-217df6984d7e,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bafea12ac56a396c9e3c5b4985c434f4aa90ca96f2043cf0b42335c4ccfaee4,PodSandboxId:06869ac992b1b8111674755ef186965355b3cf5b8bd68a2ce6122312ee8e5839,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765960819787793061,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 096ac532-94e3-4c84-834b-a3749b9fc71c,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f77421a9c08d419666dd2d4d752ec5754ea00f9553624bf9ea84af84ca1d62,PodSandboxId:0959434dec2e35f50499215c76be050fdacc1c2c181f2e36438c3f8067ac5ad1,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765960737717717277,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-tvjx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ccdb9e-3552-4d0c-aa79-209aa4bc384e,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.p
orts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a38390ce93a173e94bede6cf6421552defd4cc8b49d9d37f2ecf97671f75d6c2,PodSandboxId:64eb7bcbb251e4830e3d11c315e467160b0687269f60e88bd80fd4d006e0b482,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUN
NING,CreatedAt:1765960737650267658,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-vlpbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5767dc2c-d2a7-40df-9980-cf0eb5099135,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a79860283a5d688bfe587f86250a792d36d69e5ac6ad4f9fb2f0d883b408cb,PodSandboxId:5af1f6c2a652c192a45c28158
27f125736e7052adfbb728b910354f9c511741e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1765960737135627122,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mlzkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006580b2-c5aa-46f2-a109-0b4e4293a31d,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ae20198e715809c55c5aa1cc0b3381b754821cfe0b0c46ae53cae097d05216e,PodSandboxId:011e23e331c5febae5e463faffa959d50f8f082c6aa1d198ec0f0213b1
f4d179,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765960737140440279,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc28e214-79dc-4410-9e19-5e01dc8c177e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e314a4cd7ebcb67b1f912124f948ae99758af84ebad9776b3b3602546d454b,PodSandboxId:c7036e926edab5c4d154a6569f41063e98c69df84cbac6722506810b7601fd63,Metad
ata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1765960734512370039,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a64983b2a6e32426f2b85bf4025ab6,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab0b7679f8
7d69cdc32a244cea1ba39e59fdf91d2f53f6311b9b967026a439d,PodSandboxId:4b5459e69ba11517197d53fc6fffce57e140a1d288aee3172076464d8d348d83,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1765960734492361795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c66f6f2e75aeefdfac5925984824a19,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afae314e118b19ff9c06569c5a14f8f91ab46fec1e8130e020e703f720115437,PodSandboxId:22bb51b96af29060aee8fa8f0991b7aa52bfa9569e427f40f109496493cb7526,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,CreatedAt:1765960734486910864,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f5a1f5e0c67075627ce1766978c877,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bbc4201b4702bae790066874d19fed5d287ad766c42b9fa697cc27070c56ab9,PodSandboxId:18a61a19556c83b9cc2d82630bc4a99969d429742da752a4b2f76392ba842128,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1765960734446267806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d663db3988802dd0b7f3a700e2703644,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPo
rt\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb75140ee1acd6e5714883d02293293abb7231572ea9276571847bb58080c36e,PodSandboxId:69fa824c620eb19adc3f011c88a7a2667aaf4b0454d144f44b6057ca46b8c0e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_EXITED,CreatedAt:1765960693659832385,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mlzkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006580b2-c5aa-46f2-a109-0b4e4293a31d,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d6e0e1a9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e85c94f90b224595b7f5063268abd750dfae8b503a1bd967141af459bce0472d,PodSandboxId:71812c39917583f284a2a4ed5532a76e8241ef9f313532cbbb5102c9f47a0aa6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765960693676740655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-vlpbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5767dc2c-d2a7-40df-9980-cf0eb5099135,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kuber
netes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c41c46ed24892773ba743ef8a2e7b0127cf5a0e070f13fa09b5e982909263045,PodSandboxId:f2694d814f15a1423255f88ffab3a14679cad607b610b3d46cae4f97aba353e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,St
ate:CONTAINER_EXITED,CreatedAt:1765960693644322289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-tvjx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ccdb9e-3552-4d0c-aa79-209aa4bc384e,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24a8c206284f995e1beab427472536e2d5ac218b538808a3a7eb1194ce0849f8,PodSandboxId:715e5eeee
18dfc4b463d53d93a80cf8ab421378058d8c21e32071b228d5553f3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_EXITED,CreatedAt:1765960691012318916,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c66f6f2e75aeefdfac5925984824a19,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56fd93bc6d
042ffc6d5e70b53079a47da3495248a5349c01d8f3a3ac359bf531,PodSandboxId:afad058a661b32de13f3638ac69bece90b9fa781f8c8dc6edb25b23ddb026fc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_EXITED,CreatedAt:1765960691013585704,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d663db3988802dd0b7f3a700e2703644,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aaa296c1ec268580ef35dfb6084b2d11765a0551fd3025f7cb535ed69d8c036,PodSandboxId:b8a40898664357842f4a01c490a316886b9502b89edcb571e646fed65e16d0b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_EXITED,CreatedAt:1765960690993886417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a64983b2a6e32426f2b85bf4025ab6,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"
TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efc92cf4ba908fee23149245dbc756291b252ab89d06ae1cd620e3763c633f12,PodSandboxId:3a70ccdc3e38a0bd793053d58cf9bef42ae122e408cbe7c66674d9eadbdff820,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765960574188270257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc28e214-79dc-4410-9e19-5e01dc8c177e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=91097283-e3b1-4d60-ba5c-79542f078188 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:49:18 functional-452472 crio[6904]: time="2025-12-17 08:49:18.357145913Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=be3d533e-047f-4bfd-be91-009a4c45f4f5 name=/runtime.v1.RuntimeService/Version
	Dec 17 08:49:18 functional-452472 crio[6904]: time="2025-12-17 08:49:18.357252388Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=be3d533e-047f-4bfd-be91-009a4c45f4f5 name=/runtime.v1.RuntimeService/Version
	Dec 17 08:49:18 functional-452472 crio[6904]: time="2025-12-17 08:49:18.358370267Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d8e9e153-6b30-40ac-826c-90d0b21e73d3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:49:18 functional-452472 crio[6904]: time="2025-12-17 08:49:18.359844070Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765961358359820466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:242163,},InodesUsed:&UInt64Value{Value:113,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8e9e153-6b30-40ac-826c-90d0b21e73d3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:49:18 functional-452472 crio[6904]: time="2025-12-17 08:49:18.360981754Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a949aa42-c5e5-475d-9662-e294d628227b name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:49:18 functional-452472 crio[6904]: time="2025-12-17 08:49:18.361102378Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a949aa42-c5e5-475d-9662-e294d628227b name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:49:18 functional-452472 crio[6904]: time="2025-12-17 08:49:18.361492357Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff1772eb8fe667aeab886ba7d737b0a1e5669284c09a0bbba4f4c2bceb7edb91,PodSandboxId:c526b60999cf84ed5b6e371f5d98b4f5ce5fa1450cd5effe418a930d49270c30,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765960915113959626,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-tv9dk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8efc52d2-890d-4c37-babf-ec218c8544df,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bedb4e1b318c719cfedaec8f80eb9bb307b355e8616bbcde9ca2b4fc3ced8ec0,PodSandboxId:e3660be0b8fbb99d1b99ebb1e270e6a59d8d4e81eb33571b0c50b9f3d52ab1e1,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765960833416433240,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cb0b118-a214-4183-94c8-217df6984d7e,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bafea12ac56a396c9e3c5b4985c434f4aa90ca96f2043cf0b42335c4ccfaee4,PodSandboxId:06869ac992b1b8111674755ef186965355b3cf5b8bd68a2ce6122312ee8e5839,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765960819787793061,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 096ac532-94e3-4c84-834b-a3749b9fc71c,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f77421a9c08d419666dd2d4d752ec5754ea00f9553624bf9ea84af84ca1d62,PodSandboxId:0959434dec2e35f50499215c76be050fdacc1c2c181f2e36438c3f8067ac5ad1,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765960737717717277,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-tvjx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ccdb9e-3552-4d0c-aa79-209aa4bc384e,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.p
orts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a38390ce93a173e94bede6cf6421552defd4cc8b49d9d37f2ecf97671f75d6c2,PodSandboxId:64eb7bcbb251e4830e3d11c315e467160b0687269f60e88bd80fd4d006e0b482,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUN
NING,CreatedAt:1765960737650267658,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-vlpbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5767dc2c-d2a7-40df-9980-cf0eb5099135,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a79860283a5d688bfe587f86250a792d36d69e5ac6ad4f9fb2f0d883b408cb,PodSandboxId:5af1f6c2a652c192a45c28158
27f125736e7052adfbb728b910354f9c511741e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1765960737135627122,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mlzkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006580b2-c5aa-46f2-a109-0b4e4293a31d,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ae20198e715809c55c5aa1cc0b3381b754821cfe0b0c46ae53cae097d05216e,PodSandboxId:011e23e331c5febae5e463faffa959d50f8f082c6aa1d198ec0f0213b1
f4d179,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765960737140440279,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc28e214-79dc-4410-9e19-5e01dc8c177e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e314a4cd7ebcb67b1f912124f948ae99758af84ebad9776b3b3602546d454b,PodSandboxId:c7036e926edab5c4d154a6569f41063e98c69df84cbac6722506810b7601fd63,Metad
ata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1765960734512370039,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a64983b2a6e32426f2b85bf4025ab6,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab0b7679f8
7d69cdc32a244cea1ba39e59fdf91d2f53f6311b9b967026a439d,PodSandboxId:4b5459e69ba11517197d53fc6fffce57e140a1d288aee3172076464d8d348d83,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1765960734492361795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c66f6f2e75aeefdfac5925984824a19,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afae314e118b19ff9c06569c5a14f8f91ab46fec1e8130e020e703f720115437,PodSandboxId:22bb51b96af29060aee8fa8f0991b7aa52bfa9569e427f40f109496493cb7526,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,CreatedAt:1765960734486910864,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f5a1f5e0c67075627ce1766978c877,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bbc4201b4702bae790066874d19fed5d287ad766c42b9fa697cc27070c56ab9,PodSandboxId:18a61a19556c83b9cc2d82630bc4a99969d429742da752a4b2f76392ba842128,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1765960734446267806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d663db3988802dd0b7f3a700e2703644,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPo
rt\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb75140ee1acd6e5714883d02293293abb7231572ea9276571847bb58080c36e,PodSandboxId:69fa824c620eb19adc3f011c88a7a2667aaf4b0454d144f44b6057ca46b8c0e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_EXITED,CreatedAt:1765960693659832385,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mlzkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006580b2-c5aa-46f2-a109-0b4e4293a31d,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d6e0e1a9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e85c94f90b224595b7f5063268abd750dfae8b503a1bd967141af459bce0472d,PodSandboxId:71812c39917583f284a2a4ed5532a76e8241ef9f313532cbbb5102c9f47a0aa6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765960693676740655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-vlpbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5767dc2c-d2a7-40df-9980-cf0eb5099135,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kuber
netes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c41c46ed24892773ba743ef8a2e7b0127cf5a0e070f13fa09b5e982909263045,PodSandboxId:f2694d814f15a1423255f88ffab3a14679cad607b610b3d46cae4f97aba353e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,St
ate:CONTAINER_EXITED,CreatedAt:1765960693644322289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-tvjx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ccdb9e-3552-4d0c-aa79-209aa4bc384e,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24a8c206284f995e1beab427472536e2d5ac218b538808a3a7eb1194ce0849f8,PodSandboxId:715e5eeee
18dfc4b463d53d93a80cf8ab421378058d8c21e32071b228d5553f3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_EXITED,CreatedAt:1765960691012318916,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c66f6f2e75aeefdfac5925984824a19,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56fd93bc6d
042ffc6d5e70b53079a47da3495248a5349c01d8f3a3ac359bf531,PodSandboxId:afad058a661b32de13f3638ac69bece90b9fa781f8c8dc6edb25b23ddb026fc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_EXITED,CreatedAt:1765960691013585704,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d663db3988802dd0b7f3a700e2703644,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aaa296c1ec268580ef35dfb6084b2d11765a0551fd3025f7cb535ed69d8c036,PodSandboxId:b8a40898664357842f4a01c490a316886b9502b89edcb571e646fed65e16d0b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_EXITED,CreatedAt:1765960690993886417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a64983b2a6e32426f2b85bf4025ab6,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"
TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efc92cf4ba908fee23149245dbc756291b252ab89d06ae1cd620e3763c633f12,PodSandboxId:3a70ccdc3e38a0bd793053d58cf9bef42ae122e408cbe7c66674d9eadbdff820,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765960574188270257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc28e214-79dc-4410-9e19-5e01dc8c177e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a949aa42-c5e5-475d-9662-e294d628227b name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:49:18 functional-452472 crio[6904]: time="2025-12-17 08:49:18.390956225Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=956f9e10-0485-452e-81aa-507758bc39f8 name=/runtime.v1.RuntimeService/Version
	Dec 17 08:49:18 functional-452472 crio[6904]: time="2025-12-17 08:49:18.391517147Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=956f9e10-0485-452e-81aa-507758bc39f8 name=/runtime.v1.RuntimeService/Version
	Dec 17 08:49:18 functional-452472 crio[6904]: time="2025-12-17 08:49:18.392816097Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=48c47513-7a09-4e3c-8748-d85710ff2409 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:49:18 functional-452472 crio[6904]: time="2025-12-17 08:49:18.392850937Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=6a3edd94-1a70-431e-ba0a-253f40200172 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 17 08:49:18 functional-452472 crio[6904]: time="2025-12-17 08:49:18.393655184Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:55b780d79843eabdb15205ba27ea8df49d01f8744086ca3c4629fd3a1939f89d,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-b84665fb8-4d69z,Uid:c2460300-6622-441f-a2dc-e78dc1e9947f,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765960840312612995,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-b84665fb8-4d69z,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: c2460300-6622-441f-a2dc-e78dc1e9947f,k8s-app: kubernetes-dashboard,pod-template-hash: b84665fb8,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T08:40:39.983226266Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2837d845865d6ebeae4cccfaa8ed8b87f6763dc2480bc8463eea3812f66e5df1,Metadata:&PodSandboxMetadata{Name:da
shboard-metrics-scraper-5565989548-rm7wm,Uid:2ed28554-e147-40f1-9c98-22fee95237ba,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765960840301490601,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-5565989548-rm7wm,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 2ed28554-e147-40f1-9c98-22fee95237ba,k8s-app: dashboard-metrics-scraper,pod-template-hash: 5565989548,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T08:40:39.980350616Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:c526b60999cf84ed5b6e371f5d98b4f5ce5fa1450cd5effe418a930d49270c30,Metadata:&PodSandboxMetadata{Name:mysql-7d7b65bc95-tv9dk,Uid:8efc52d2-890d-4c37-babf-ec218c8544df,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765960834432137115,Labels:map[string]string{app: mysql,io.kubernetes.container.name: POD,io.kubernetes.pod.n
ame: mysql-7d7b65bc95-tv9dk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8efc52d2-890d-4c37-babf-ec218c8544df,pod-template-hash: 7d7b65bc95,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T08:40:34.107428340Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e3660be0b8fbb99d1b99ebb1e270e6a59d8d4e81eb33571b0c50b9f3d52ab1e1,Metadata:&PodSandboxMetadata{Name:sp-pod,Uid:0cb0b118-a214-4183-94c8-217df6984d7e,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765960833153897774,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cb0b118-a214-4183-94c8-217df6984d7e,test: storage-provisioner,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"test\":\"storage-provisioner\"},\"name\":\"sp-pod\",\"namespace\":\"default\"},\"spec\":{\"containers\
":[{\"image\":\"public.ecr.aws/nginx/nginx:alpine\",\"name\":\"myfrontend\",\"volumeMounts\":[{\"mountPath\":\"/tmp/mount\",\"name\":\"mypd\"}]}],\"volumes\":[{\"name\":\"mypd\",\"persistentVolumeClaim\":{\"claimName\":\"myclaim\"}}]}}\n,kubernetes.io/config.seen: 2025-12-17T08:40:32.823522600Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:06869ac992b1b8111674755ef186965355b3cf5b8bd68a2ce6122312ee8e5839,Metadata:&PodSandboxMetadata{Name:busybox-mount,Uid:096ac532-94e3-4c84-834b-a3749b9fc71c,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1765960760608244802,Labels:map[string]string{integration-test: busybox-mount,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 096ac532-94e3-4c84-834b-a3749b9fc71c,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T08:39:20.290935562Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b7920b5188cca89ee5890312c4a0d23aeb226af
d5a6969932c18782d2a646a5c,Metadata:&PodSandboxMetadata{Name:hello-node-connect-9f67c86d4-w5n8n,Uid:c9a81366-95a8-4150-a760-d6f402e6466f,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765960757324813554,Labels:map[string]string{app: hello-node-connect,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-connect-9f67c86d4-w5n8n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c9a81366-95a8-4150-a760-d6f402e6466f,pod-template-hash: 9f67c86d4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T08:39:17.000019421Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ca037cee5178a5c9980d837cb7eb5edcb83f201667eaa06ef60cfbbb862956b0,Metadata:&PodSandboxMetadata{Name:hello-node-5758569b79-92scj,Uid:764ca098-86c6-4aef-8662-0d99cec3f081,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765960756665738403,Labels:map[string]string{app: hello-node,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-5758569b79-92scj,io.kube
rnetes.pod.namespace: default,io.kubernetes.pod.uid: 764ca098-86c6-4aef-8662-0d99cec3f081,pod-template-hash: 5758569b79,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T08:39:16.340234272Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0959434dec2e35f50499215c76be050fdacc1c2c181f2e36438c3f8067ac5ad1,Metadata:&PodSandboxMetadata{Name:coredns-7d764666f9-tvjx6,Uid:44ccdb9e-3552-4d0c-aa79-209aa4bc384e,Namespace:kube-system,Attempt:4,},State:SANDBOX_READY,CreatedAt:1765960737035335940,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7d764666f9-tvjx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ccdb9e-3552-4d0c-aa79-209aa4bc384e,k8s-app: kube-dns,pod-template-hash: 7d764666f9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T08:38:56.543105802Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:64eb7bcbb251e4830e3d11c315e467160b0687269f60e88bd80fd4d006e0b482,Metadata:&PodSand
boxMetadata{Name:coredns-7d764666f9-vlpbt,Uid:5767dc2c-d2a7-40df-9980-cf0eb5099135,Namespace:kube-system,Attempt:4,},State:SANDBOX_READY,CreatedAt:1765960737031800559,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7d764666f9-vlpbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5767dc2c-d2a7-40df-9980-cf0eb5099135,k8s-app: kube-dns,pod-template-hash: 7d764666f9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T08:38:56.543106948Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:011e23e331c5febae5e463faffa959d50f8f082c6aa1d198ec0f0213b1f4d179,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:cc28e214-79dc-4410-9e19-5e01dc8c177e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1765960736881930576,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: cc28e214-79dc-4410-9e19-5e01dc8c177e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-17T08:38:56.543104586Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5af1f6c2a652c192a45c2815827f125736e7052adfbb728b910354f9c511741e,Me
tadata:&PodSandboxMetadata{Name:kube-proxy-mlzkt,Uid:006580b2-c5aa-46f2-a109-0b4e4293a31d,Namespace:kube-system,Attempt:4,},State:SANDBOX_READY,CreatedAt:1765960736873714425,Labels:map[string]string{controller-revision-hash: 57c97698cf,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-mlzkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006580b2-c5aa-46f2-a109-0b4e4293a31d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T08:38:56.543098654Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:22bb51b96af29060aee8fa8f0991b7aa52bfa9569e427f40f109496493cb7526,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-452472,Uid:c9f5a1f5e0c67075627ce1766978c877,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765960734282649060,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-452472,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f5a1f5e0c67075627ce1766978c877,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.226:8441,kubernetes.io/config.hash: c9f5a1f5e0c67075627ce1766978c877,kubernetes.io/config.seen: 2025-12-17T08:38:53.540681565Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c7036e926edab5c4d154a6569f41063e98c69df84cbac6722506810b7601fd63,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-452472,Uid:59a64983b2a6e32426f2b85bf4025ab6,Namespace:kube-system,Attempt:4,},State:SANDBOX_READY,CreatedAt:1765960734238986165,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a64983b2a6e32426f2b85bf4025ab6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 59a64983b2a6e3242
6f2b85bf4025ab6,kubernetes.io/config.seen: 2025-12-17T08:38:53.540682733Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4b5459e69ba11517197d53fc6fffce57e140a1d288aee3172076464d8d348d83,Metadata:&PodSandboxMetadata{Name:etcd-functional-452472,Uid:5c66f6f2e75aeefdfac5925984824a19,Namespace:kube-system,Attempt:4,},State:SANDBOX_READY,CreatedAt:1765960734214706203,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c66f6f2e75aeefdfac5925984824a19,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.226:2379,kubernetes.io/config.hash: 5c66f6f2e75aeefdfac5925984824a19,kubernetes.io/config.seen: 2025-12-17T08:38:53.540680443Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:18a61a19556c83b9cc2d82630bc4a99969d429742da752a4b2f76392ba842128,Metadata:&PodSandboxMetadata{Name:kub
e-scheduler-functional-452472,Uid:d663db3988802dd0b7f3a700e2703644,Namespace:kube-system,Attempt:4,},State:SANDBOX_READY,CreatedAt:1765960734204576322,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d663db3988802dd0b7f3a700e2703644,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d663db3988802dd0b7f3a700e2703644,kubernetes.io/config.seen: 2025-12-17T08:38:53.540677183Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:71812c39917583f284a2a4ed5532a76e8241ef9f313532cbbb5102c9f47a0aa6,Metadata:&PodSandboxMetadata{Name:coredns-7d764666f9-vlpbt,Uid:5767dc2c-d2a7-40df-9980-cf0eb5099135,Namespace:kube-system,Attempt:2,},State:SANDBOX_NOTREADY,CreatedAt:1765960688268491397,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7d764666f9-vlpbt,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 5767dc2c-d2a7-40df-9980-cf0eb5099135,k8s-app: kube-dns,pod-template-hash: 7d764666f9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T08:36:12.564143838Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f2694d814f15a1423255f88ffab3a14679cad607b610b3d46cae4f97aba353e4,Metadata:&PodSandboxMetadata{Name:coredns-7d764666f9-tvjx6,Uid:44ccdb9e-3552-4d0c-aa79-209aa4bc384e,Namespace:kube-system,Attempt:2,},State:SANDBOX_NOTREADY,CreatedAt:1765960688260429234,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7d764666f9-tvjx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ccdb9e-3552-4d0c-aa79-209aa4bc384e,k8s-app: kube-dns,pod-template-hash: 7d764666f9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T08:36:12.554884463Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:69fa824c620eb19adc3f011c88a7a2667aaf4b0454d144f44b6057ca46b8c0e4,Metadata:&PodSandbox
Metadata{Name:kube-proxy-mlzkt,Uid:006580b2-c5aa-46f2-a109-0b4e4293a31d,Namespace:kube-system,Attempt:2,},State:SANDBOX_NOTREADY,CreatedAt:1765960688099022774,Labels:map[string]string{controller-revision-hash: 57c97698cf,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-mlzkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006580b2-c5aa-46f2-a109-0b4e4293a31d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T08:36:12.486373365Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:715e5eeee18dfc4b463d53d93a80cf8ab421378058d8c21e32071b228d5553f3,Metadata:&PodSandboxMetadata{Name:etcd-functional-452472,Uid:5c66f6f2e75aeefdfac5925984824a19,Namespace:kube-system,Attempt:2,},State:SANDBOX_NOTREADY,CreatedAt:1765960688029765112,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 5c66f6f2e75aeefdfac5925984824a19,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.226:2379,kubernetes.io/config.hash: 5c66f6f2e75aeefdfac5925984824a19,kubernetes.io/config.seen: 2025-12-17T08:36:06.992561749Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b8a40898664357842f4a01c490a316886b9502b89edcb571e646fed65e16d0b9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-452472,Uid:59a64983b2a6e32426f2b85bf4025ab6,Namespace:kube-system,Attempt:2,},State:SANDBOX_NOTREADY,CreatedAt:1765960687939853091,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a64983b2a6e32426f2b85bf4025ab6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 59a64983b2a6e32426f2b85bf4025ab6,kubernetes.io/config.seen: 202
5-12-17T08:36:06.992566103Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:afad058a661b32de13f3638ac69bece90b9fa781f8c8dc6edb25b23ddb026fc0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-452472,Uid:d663db3988802dd0b7f3a700e2703644,Namespace:kube-system,Attempt:2,},State:SANDBOX_NOTREADY,CreatedAt:1765960687911150161,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d663db3988802dd0b7f3a700e2703644,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d663db3988802dd0b7f3a700e2703644,kubernetes.io/config.seen: 2025-12-17T08:36:06.992566922Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3a70ccdc3e38a0bd793053d58cf9bef42ae122e408cbe7c66674d9eadbdff820,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:cc28e214-79dc-4410-9e19-5e01dc8c177e,Namespace:kube-system,Attempt:0,}
,State:SANDBOX_NOTREADY,CreatedAt:1765960573380337012,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc28e214-79dc-4410-9e19-5e01dc8c177e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path
\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-17T08:36:13.051075791Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=6a3edd94-1a70-431e-ba0a-253f40200172 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 17 08:49:18 functional-452472 crio[6904]: time="2025-12-17 08:49:18.394081032Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765961358394059755,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:242163,},InodesUsed:&UInt64Value{Value:113,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=48c47513-7a09-4e3c-8748-d85710ff2409 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 08:49:18 functional-452472 crio[6904]: time="2025-12-17 08:49:18.395089677Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee3c72f9-4ac5-49c2-9cc1-009d49d76cfd name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:49:18 functional-452472 crio[6904]: time="2025-12-17 08:49:18.395201105Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee3c72f9-4ac5-49c2-9cc1-009d49d76cfd name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:49:18 functional-452472 crio[6904]: time="2025-12-17 08:49:18.395541136Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff1772eb8fe667aeab886ba7d737b0a1e5669284c09a0bbba4f4c2bceb7edb91,PodSandboxId:c526b60999cf84ed5b6e371f5d98b4f5ce5fa1450cd5effe418a930d49270c30,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765960915113959626,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-tv9dk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8efc52d2-890d-4c37-babf-ec218c8544df,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bedb4e1b318c719cfedaec8f80eb9bb307b355e8616bbcde9ca2b4fc3ced8ec0,PodSandboxId:e3660be0b8fbb99d1b99ebb1e270e6a59d8d4e81eb33571b0c50b9f3d52ab1e1,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765960833416433240,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cb0b118-a214-4183-94c8-217df6984d7e,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bafea12ac56a396c9e3c5b4985c434f4aa90ca96f2043cf0b42335c4ccfaee4,PodSandboxId:06869ac992b1b8111674755ef186965355b3cf5b8bd68a2ce6122312ee8e5839,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765960819787793061,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 096ac532-94e3-4c84-834b-a3749b9fc71c,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f77421a9c08d419666dd2d4d752ec5754ea00f9553624bf9ea84af84ca1d62,PodSandboxId:0959434dec2e35f50499215c76be050fdacc1c2c181f2e36438c3f8067ac5ad1,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765960737717717277,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-tvjx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ccdb9e-3552-4d0c-aa79-209aa4bc384e,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.p
orts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a38390ce93a173e94bede6cf6421552defd4cc8b49d9d37f2ecf97671f75d6c2,PodSandboxId:64eb7bcbb251e4830e3d11c315e467160b0687269f60e88bd80fd4d006e0b482,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUN
NING,CreatedAt:1765960737650267658,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-vlpbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5767dc2c-d2a7-40df-9980-cf0eb5099135,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a79860283a5d688bfe587f86250a792d36d69e5ac6ad4f9fb2f0d883b408cb,PodSandboxId:5af1f6c2a652c192a45c28158
27f125736e7052adfbb728b910354f9c511741e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1765960737135627122,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mlzkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006580b2-c5aa-46f2-a109-0b4e4293a31d,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ae20198e715809c55c5aa1cc0b3381b754821cfe0b0c46ae53cae097d05216e,PodSandboxId:011e23e331c5febae5e463faffa959d50f8f082c6aa1d198ec0f0213b1
f4d179,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765960737140440279,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc28e214-79dc-4410-9e19-5e01dc8c177e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e314a4cd7ebcb67b1f912124f948ae99758af84ebad9776b3b3602546d454b,PodSandboxId:c7036e926edab5c4d154a6569f41063e98c69df84cbac6722506810b7601fd63,Metad
ata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1765960734512370039,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a64983b2a6e32426f2b85bf4025ab6,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab0b7679f8
7d69cdc32a244cea1ba39e59fdf91d2f53f6311b9b967026a439d,PodSandboxId:4b5459e69ba11517197d53fc6fffce57e140a1d288aee3172076464d8d348d83,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1765960734492361795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c66f6f2e75aeefdfac5925984824a19,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afae314e118b19ff9c06569c5a14f8f91ab46fec1e8130e020e703f720115437,PodSandboxId:22bb51b96af29060aee8fa8f0991b7aa52bfa9569e427f40f109496493cb7526,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,CreatedAt:1765960734486910864,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f5a1f5e0c67075627ce1766978c877,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bbc4201b4702bae790066874d19fed5d287ad766c42b9fa697cc27070c56ab9,PodSandboxId:18a61a19556c83b9cc2d82630bc4a99969d429742da752a4b2f76392ba842128,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1765960734446267806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d663db3988802dd0b7f3a700e2703644,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPo
rt\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb75140ee1acd6e5714883d02293293abb7231572ea9276571847bb58080c36e,PodSandboxId:69fa824c620eb19adc3f011c88a7a2667aaf4b0454d144f44b6057ca46b8c0e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_EXITED,CreatedAt:1765960693659832385,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mlzkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006580b2-c5aa-46f2-a109-0b4e4293a31d,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d6e0e1a9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e85c94f90b224595b7f5063268abd750dfae8b503a1bd967141af459bce0472d,PodSandboxId:71812c39917583f284a2a4ed5532a76e8241ef9f313532cbbb5102c9f47a0aa6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765960693676740655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-vlpbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5767dc2c-d2a7-40df-9980-cf0eb5099135,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kuber
netes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c41c46ed24892773ba743ef8a2e7b0127cf5a0e070f13fa09b5e982909263045,PodSandboxId:f2694d814f15a1423255f88ffab3a14679cad607b610b3d46cae4f97aba353e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,St
ate:CONTAINER_EXITED,CreatedAt:1765960693644322289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-tvjx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ccdb9e-3552-4d0c-aa79-209aa4bc384e,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24a8c206284f995e1beab427472536e2d5ac218b538808a3a7eb1194ce0849f8,PodSandboxId:715e5eeee
18dfc4b463d53d93a80cf8ab421378058d8c21e32071b228d5553f3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_EXITED,CreatedAt:1765960691012318916,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c66f6f2e75aeefdfac5925984824a19,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56fd93bc6d
042ffc6d5e70b53079a47da3495248a5349c01d8f3a3ac359bf531,PodSandboxId:afad058a661b32de13f3638ac69bece90b9fa781f8c8dc6edb25b23ddb026fc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_EXITED,CreatedAt:1765960691013585704,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d663db3988802dd0b7f3a700e2703644,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aaa296c1ec268580ef35dfb6084b2d11765a0551fd3025f7cb535ed69d8c036,PodSandboxId:b8a40898664357842f4a01c490a316886b9502b89edcb571e646fed65e16d0b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_EXITED,CreatedAt:1765960690993886417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a64983b2a6e32426f2b85bf4025ab6,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"
TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efc92cf4ba908fee23149245dbc756291b252ab89d06ae1cd620e3763c633f12,PodSandboxId:3a70ccdc3e38a0bd793053d58cf9bef42ae122e408cbe7c66674d9eadbdff820,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765960574188270257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc28e214-79dc-4410-9e19-5e01dc8c177e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ee3c72f9-4ac5-49c2-9cc1-009d49d76cfd name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:49:18 functional-452472 crio[6904]: time="2025-12-17 08:49:18.396386997Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=65c7679f-1a24-40f3-b200-48f47a75b318 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:49:18 functional-452472 crio[6904]: time="2025-12-17 08:49:18.396553216Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=65c7679f-1a24-40f3-b200-48f47a75b318 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 08:49:18 functional-452472 crio[6904]: time="2025-12-17 08:49:18.397476973Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff1772eb8fe667aeab886ba7d737b0a1e5669284c09a0bbba4f4c2bceb7edb91,PodSandboxId:c526b60999cf84ed5b6e371f5d98b4f5ce5fa1450cd5effe418a930d49270c30,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438,State:CONTAINER_RUNNING,CreatedAt:1765960915113959626,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-7d7b65bc95-tv9dk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8efc52d2-890d-4c37-babf-ec218c8544df,},Annotations:map[string]string{io.kubernetes.container.hash: 60abff75,io.kubernetes.container.ports: [{\"name\":\
"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bedb4e1b318c719cfedaec8f80eb9bb307b355e8616bbcde9ca2b4fc3ced8ec0,PodSandboxId:e3660be0b8fbb99d1b99ebb1e270e6a59d8d4e81eb33571b0c50b9f3d52ab1e1,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765960833416433240,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cb0b118-a214-4183-94c8-217df6984d7e,},Annotations:map[string]string{io.kubernetes.container.hash: 8
389bcbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bafea12ac56a396c9e3c5b4985c434f4aa90ca96f2043cf0b42335c4ccfaee4,PodSandboxId:06869ac992b1b8111674755ef186965355b3cf5b8bd68a2ce6122312ee8e5839,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765960819787793061,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 096ac532-94e3-4c84-834b-a3749b9fc71c,},Annotations:map[string]string{io.kubernetes.container.hash: dbb
284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89f77421a9c08d419666dd2d4d752ec5754ea00f9553624bf9ea84af84ca1d62,PodSandboxId:0959434dec2e35f50499215c76be050fdacc1c2c181f2e36438c3f8067ac5ad1,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765960737717717277,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-tvjx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ccdb9e-3552-4d0c-aa79-209aa4bc384e,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.p
orts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a38390ce93a173e94bede6cf6421552defd4cc8b49d9d37f2ecf97671f75d6c2,PodSandboxId:64eb7bcbb251e4830e3d11c315e467160b0687269f60e88bd80fd4d006e0b482,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUN
NING,CreatedAt:1765960737650267658,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-vlpbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5767dc2c-d2a7-40df-9980-cf0eb5099135,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a79860283a5d688bfe587f86250a792d36d69e5ac6ad4f9fb2f0d883b408cb,PodSandboxId:5af1f6c2a652c192a45c28158
27f125736e7052adfbb728b910354f9c511741e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_RUNNING,CreatedAt:1765960737135627122,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mlzkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006580b2-c5aa-46f2-a109-0b4e4293a31d,},Annotations:map[string]string{io.kubernetes.container.hash: d6e0e1a9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ae20198e715809c55c5aa1cc0b3381b754821cfe0b0c46ae53cae097d05216e,PodSandboxId:011e23e331c5febae5e463faffa959d50f8f082c6aa1d198ec0f0213b1
f4d179,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765960737140440279,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc28e214-79dc-4410-9e19-5e01dc8c177e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e314a4cd7ebcb67b1f912124f948ae99758af84ebad9776b3b3602546d454b,PodSandboxId:c7036e926edab5c4d154a6569f41063e98c69df84cbac6722506810b7601fd63,Metad
ata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_RUNNING,CreatedAt:1765960734512370039,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a64983b2a6e32426f2b85bf4025ab6,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab0b7679f8
7d69cdc32a244cea1ba39e59fdf91d2f53f6311b9b967026a439d,PodSandboxId:4b5459e69ba11517197d53fc6fffce57e140a1d288aee3172076464d8d348d83,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_RUNNING,CreatedAt:1765960734492361795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c66f6f2e75aeefdfac5925984824a19,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afae314e118b19ff9c06569c5a14f8f91ab46fec1e8130e020e703f720115437,PodSandboxId:22bb51b96af29060aee8fa8f0991b7aa52bfa9569e427f40f109496493cb7526,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce,State:CONTAINER_RUNNING,CreatedAt:1765960734486910864,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9f5a1f5e0c67075627ce1766978c877,},Annotations:map[string]string{io.kubernetes.container.hash: 3a762ed7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bbc4201b4702bae790066874d19fed5d287ad766c42b9fa697cc27070c56ab9,PodSandboxId:18a61a19556c83b9cc2d82630bc4a99969d429742da752a4b2f76392ba842128,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_RUNNING,CreatedAt:1765960734446267806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d663db3988802dd0b7f3a700e2703644,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPo
rt\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb75140ee1acd6e5714883d02293293abb7231572ea9276571847bb58080c36e,PodSandboxId:69fa824c620eb19adc3f011c88a7a2667aaf4b0454d144f44b6057ca46b8c0e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a,State:CONTAINER_EXITED,CreatedAt:1765960693659832385,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mlzkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006580b2-c5aa-46f2-a109-0b4e4293a31d,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d6e0e1a9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e85c94f90b224595b7f5063268abd750dfae8b503a1bd967141af459bce0472d,PodSandboxId:71812c39917583f284a2a4ed5532a76e8241ef9f313532cbbb5102c9f47a0aa6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765960693676740655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-vlpbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5767dc2c-d2a7-40df-9980-cf0eb5099135,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kuber
netes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c41c46ed24892773ba743ef8a2e7b0127cf5a0e070f13fa09b5e982909263045,PodSandboxId:f2694d814f15a1423255f88ffab3a14679cad607b610b3d46cae4f97aba353e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,St
ate:CONTAINER_EXITED,CreatedAt:1765960693644322289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-tvjx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ccdb9e-3552-4d0c-aa79-209aa4bc384e,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24a8c206284f995e1beab427472536e2d5ac218b538808a3a7eb1194ce0849f8,PodSandboxId:715e5eeee
18dfc4b463d53d93a80cf8ab421378058d8c21e32071b228d5553f3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2,State:CONTAINER_EXITED,CreatedAt:1765960691012318916,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c66f6f2e75aeefdfac5925984824a19,},Annotations:map[string]string{io.kubernetes.container.hash: d48420cb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56fd93bc6d
042ffc6d5e70b53079a47da3495248a5349c01d8f3a3ac359bf531,PodSandboxId:afad058a661b32de13f3638ac69bece90b9fa781f8c8dc6edb25b23ddb026fc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc,State:CONTAINER_EXITED,CreatedAt:1765960691013585704,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d663db3988802dd0b7f3a700e2703644,},Annotations:map[string]string{io.kubernetes.container.hash: 387426db,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aaa296c1ec268580ef35dfb6084b2d11765a0551fd3025f7cb535ed69d8c036,PodSandboxId:b8a40898664357842f4a01c490a316886b9502b89edcb571e646fed65e16d0b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614,State:CONTAINER_EXITED,CreatedAt:1765960690993886417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-452472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59a64983b2a6e32426f2b85bf4025ab6,},Annotations:map[string]string{io.kubernetes.container.hash: b01a1ee5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"
TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efc92cf4ba908fee23149245dbc756291b252ab89d06ae1cd620e3763c633f12,PodSandboxId:3a70ccdc3e38a0bd793053d58cf9bef42ae122e408cbe7c66674d9eadbdff820,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765960574188270257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc28e214-79dc-4410-9e19-5e01dc8c177e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=65c7679f-1a24-40f3-b200-48f47a75b318 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                         CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ff1772eb8fe66       public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036   7 minutes ago       Running             mysql                     0                   c526b60999cf8       mysql-7d7b65bc95-tv9dk                      default
	bedb4e1b318c7       a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c                                              8 minutes ago       Running             myfrontend                0                   e3660be0b8fbb       sp-pod                                      default
	1bafea12ac56a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e           8 minutes ago       Exited              mount-munger              0                   06869ac992b1b       busybox-mount                               default
	89f77421a9c08       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                              10 minutes ago      Running             coredns                   3                   0959434dec2e3       coredns-7d764666f9-tvjx6                    kube-system
	a38390ce93a17       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                              10 minutes ago      Running             coredns                   3                   64eb7bcbb251e       coredns-7d764666f9-vlpbt                    kube-system
	1ae20198e7158       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              10 minutes ago      Running             storage-provisioner       2                   011e23e331c5f       storage-provisioner                         kube-system
	66a79860283a5       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                              10 minutes ago      Running             kube-proxy                3                   5af1f6c2a652c       kube-proxy-mlzkt                            kube-system
	d8e314a4cd7eb       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                              10 minutes ago      Running             kube-controller-manager   3                   c7036e926edab       kube-controller-manager-functional-452472   kube-system
	eab0b7679f87d       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                              10 minutes ago      Running             etcd                      3                   4b5459e69ba11       etcd-functional-452472                      kube-system
	afae314e118b1       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                              10 minutes ago      Running             kube-apiserver            0                   22bb51b96af29       kube-apiserver-functional-452472            kube-system
	4bbc4201b4702       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                              10 minutes ago      Running             kube-scheduler            3                   18a61a19556c8       kube-scheduler-functional-452472            kube-system
	e85c94f90b224       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                              11 minutes ago      Exited              coredns                   2                   71812c3991758       coredns-7d764666f9-vlpbt                    kube-system
	bb75140ee1acd       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                              11 minutes ago      Exited              kube-proxy                2                   69fa824c620eb       kube-proxy-mlzkt                            kube-system
	c41c46ed24892       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                              11 minutes ago      Exited              coredns                   2                   f2694d814f15a       coredns-7d764666f9-tvjx6                    kube-system
	56fd93bc6d042       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                              11 minutes ago      Exited              kube-scheduler            2                   afad058a661b3       kube-scheduler-functional-452472            kube-system
	24a8c206284f9       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                              11 minutes ago      Exited              etcd                      2                   715e5eeee18df       etcd-functional-452472                      kube-system
	9aaa296c1ec26       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                              11 minutes ago      Exited              kube-controller-manager   2                   b8a4089866435       kube-controller-manager-functional-452472   kube-system
	efc92cf4ba908       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                              13 minutes ago      Exited              storage-provisioner       1                   3a70ccdc3e38a       storage-provisioner                         kube-system
	
	
	==> coredns [89f77421a9c08d419666dd2d4d752ec5754ea00f9553624bf9ea84af84ca1d62] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:60301 - 38524 "HINFO IN 8798269233804889227.6321400277170428478. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.14000854s
	
	
	==> coredns [a38390ce93a173e94bede6cf6421552defd4cc8b49d9d37f2ecf97671f75d6c2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:56828 - 64275 "HINFO IN 4349222604740283638.6489643425740530321. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.070840364s
	
	
	==> coredns [c41c46ed24892773ba743ef8a2e7b0127cf5a0e070f13fa09b5e982909263045] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:39443 - 42903 "HINFO IN 8449925704322377890.3510736395015828867. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.060633654s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e85c94f90b224595b7f5063268abd750dfae8b503a1bd967141af459bce0472d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:36475 - 11326 "HINFO IN 106714535523831428.6237128226454131128. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.079172803s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-452472
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-452472
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144
	                    minikube.k8s.io/name=functional-452472
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T08_36_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 08:36:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-452472
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 08:49:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 08:42:31 +0000   Wed, 17 Dec 2025 08:36:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 08:42:31 +0000   Wed, 17 Dec 2025 08:36:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 08:42:31 +0000   Wed, 17 Dec 2025 08:36:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 08:42:31 +0000   Wed, 17 Dec 2025 08:36:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.226
	  Hostname:    functional-452472
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	System Info:
	  Machine ID:                 aca9beaca1a34ffc8821f5e0518ea0b2
	  System UUID:                aca9beac-a1a3-4ffc-8821-f5e0518ea0b2
	  Boot ID:                    e50d8a89-43d4-41a0-a2a5-d33d9ae2bdd7
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-92scj                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-9f67c86d4-w5n8n            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-7d7b65bc95-tv9dk                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    8m44s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m46s
	  kube-system                 coredns-7d764666f9-tvjx6                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 coredns-7d764666f9-vlpbt                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 etcd-functional-452472                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-functional-452472              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-452472     200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-mlzkt                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-functional-452472              100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-rm7wm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m39s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-4d69z          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (72%)  700m (35%)
	  memory             752Mi (19%)  1040Mi (26%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  13m   node-controller  Node functional-452472 event: Registered Node functional-452472 in Controller
	  Normal  RegisteredNode  11m   node-controller  Node functional-452472 event: Registered Node functional-452472 in Controller
	  Normal  RegisteredNode  10m   node-controller  Node functional-452472 event: Registered Node functional-452472 in Controller
	
	
	==> dmesg <==
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.085003] kauditd_printk_skb: 1 callbacks suppressed
	[Dec17 08:36] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.141015] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.779333] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.183668] kauditd_printk_skb: 269 callbacks suppressed
	[Dec17 08:38] kauditd_printk_skb: 383 callbacks suppressed
	[  +2.029181] kauditd_printk_skb: 312 callbacks suppressed
	[  +5.150512] kauditd_printk_skb: 48 callbacks suppressed
	[  +9.057264] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.731630] kauditd_printk_skb: 12 callbacks suppressed
	[  +3.706233] kauditd_printk_skb: 246 callbacks suppressed
	[  +3.555815] kauditd_printk_skb: 183 callbacks suppressed
	[Dec17 08:39] kauditd_printk_skb: 203 callbacks suppressed
	[  +7.493186] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.000018] kauditd_printk_skb: 133 callbacks suppressed
	[  +0.000568] kauditd_printk_skb: 68 callbacks suppressed
	[ +24.958087] kauditd_printk_skb: 26 callbacks suppressed
	[Dec17 08:40] kauditd_printk_skb: 31 callbacks suppressed
	[  +6.308952] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.633276] kauditd_printk_skb: 109 callbacks suppressed
	[  +2.416709] crun[11653]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[Dec17 08:41] kauditd_printk_skb: 78 callbacks suppressed
	[Dec17 08:42] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [24a8c206284f995e1beab427472536e2d5ac218b538808a3a7eb1194ce0849f8] <==
	{"level":"info","ts":"2025-12-17T08:38:11.672228Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T08:38:11.672326Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T08:38:11.672353Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T08:38:11.672992Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T08:38:11.675260Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T08:38:11.698902Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.226:2379"}
	{"level":"info","ts":"2025-12-17T08:38:11.703081Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-17T08:38:39.186890Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-17T08:38:39.187004Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-452472","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.226:2380"],"advertise-client-urls":["https://192.168.39.226:2379"]}
	{"level":"error","ts":"2025-12-17T08:38:39.187115Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T08:38:39.270248Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T08:38:39.271669Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-17T08:38:39.271720Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-17T08:38:39.271803Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9e3e2863ac888927","current-leader-member-id":"9e3e2863ac888927"}
	{"level":"warn","ts":"2025-12-17T08:38:39.271808Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T08:38:39.271849Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T08:38:39.271861Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-17T08:38:39.271876Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-17T08:38:39.271896Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.226:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T08:38:39.271909Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.226:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T08:38:39.271915Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.226:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T08:38:39.274852Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.226:2380"}
	{"level":"error","ts":"2025-12-17T08:38:39.274944Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.226:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T08:38:39.274964Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.226:2380"}
	{"level":"info","ts":"2025-12-17T08:38:39.274970Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-452472","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.226:2380"],"advertise-client-urls":["https://192.168.39.226:2379"]}
	
	
	==> etcd [eab0b7679f87d69cdc32a244cea1ba39e59fdf91d2f53f6311b9b967026a439d] <==
	{"level":"warn","ts":"2025-12-17T08:41:50.593687Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"190.09122ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T08:41:50.593718Z","caller":"traceutil/trace.go:172","msg":"trace[1633127497] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:965; }","duration":"190.169456ms","start":"2025-12-17T08:41:50.403543Z","end":"2025-12-17T08:41:50.593712Z","steps":["trace[1633127497] 'agreement among raft nodes before linearized reading'  (duration: 190.064531ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:41:50.594001Z","caller":"traceutil/trace.go:172","msg":"trace[1620236463] transaction","detail":"{read_only:false; response_revision:966; number_of_response:1; }","duration":"245.331452ms","start":"2025-12-17T08:41:50.348662Z","end":"2025-12-17T08:41:50.593993Z","steps":["trace[1620236463] 'process raft request'  (duration: 245.245208ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:41:52.059473Z","caller":"traceutil/trace.go:172","msg":"trace[636556231] linearizableReadLoop","detail":"{readStateIndex:1073; appliedIndex:1073; }","duration":"114.176981ms","start":"2025-12-17T08:41:51.945281Z","end":"2025-12-17T08:41:52.059458Z","steps":["trace[636556231] 'read index received'  (duration: 114.172619ms)","trace[636556231] 'applied index is now lower than readState.Index'  (duration: 3.737µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T08:41:52.060008Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.711861ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T08:41:52.060437Z","caller":"traceutil/trace.go:172","msg":"trace[1717204422] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:967; }","duration":"115.149322ms","start":"2025-12-17T08:41:51.945275Z","end":"2025-12-17T08:41:52.060424Z","steps":["trace[1717204422] 'agreement among raft nodes before linearized reading'  (duration: 114.684415ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:41:52.060374Z","caller":"traceutil/trace.go:172","msg":"trace[32704051] transaction","detail":"{read_only:false; response_revision:968; number_of_response:1; }","duration":"250.083359ms","start":"2025-12-17T08:41:51.810130Z","end":"2025-12-17T08:41:52.060214Z","steps":["trace[32704051] 'process raft request'  (duration: 249.92955ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:41:53.615671Z","caller":"traceutil/trace.go:172","msg":"trace[1948319545] linearizableReadLoop","detail":"{readStateIndex:1074; appliedIndex:1074; }","duration":"213.345364ms","start":"2025-12-17T08:41:53.402310Z","end":"2025-12-17T08:41:53.615655Z","steps":["trace[1948319545] 'read index received'  (duration: 213.339938ms)","trace[1948319545] 'applied index is now lower than readState.Index'  (duration: 4.749µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T08:41:53.615776Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"213.450346ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T08:41:53.615794Z","caller":"traceutil/trace.go:172","msg":"trace[391109567] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:968; }","duration":"213.484305ms","start":"2025-12-17T08:41:53.402304Z","end":"2025-12-17T08:41:53.615789Z","steps":["trace[391109567] 'agreement among raft nodes before linearized reading'  (duration: 213.421067ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:41:54.342787Z","caller":"traceutil/trace.go:172","msg":"trace[365550372] linearizableReadLoop","detail":"{readStateIndex:1075; appliedIndex:1075; }","duration":"257.62003ms","start":"2025-12-17T08:41:54.085148Z","end":"2025-12-17T08:41:54.342768Z","steps":["trace[365550372] 'read index received'  (duration: 257.611162ms)","trace[365550372] 'applied index is now lower than readState.Index'  (duration: 5.089µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T08:41:54.342910Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"257.746735ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T08:41:54.342946Z","caller":"traceutil/trace.go:172","msg":"trace[1877976476] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:968; }","duration":"257.795836ms","start":"2025-12-17T08:41:54.085144Z","end":"2025-12-17T08:41:54.342939Z","steps":["trace[1877976476] 'agreement among raft nodes before linearized reading'  (duration: 257.720497ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T08:41:54.343299Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"212.446682ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T08:41:54.343342Z","caller":"traceutil/trace.go:172","msg":"trace[375494248] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:969; }","duration":"212.494944ms","start":"2025-12-17T08:41:54.130839Z","end":"2025-12-17T08:41:54.343334Z","steps":["trace[375494248] 'agreement among raft nodes before linearized reading'  (duration: 212.432628ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:41:54.343572Z","caller":"traceutil/trace.go:172","msg":"trace[1524409626] transaction","detail":"{read_only:false; response_revision:969; number_of_response:1; }","duration":"272.06286ms","start":"2025-12-17T08:41:54.071460Z","end":"2025-12-17T08:41:54.343523Z","steps":["trace[1524409626] 'process raft request'  (duration: 271.675178ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:41:56.529526Z","caller":"traceutil/trace.go:172","msg":"trace[1948361033] transaction","detail":"{read_only:false; response_revision:981; number_of_response:1; }","duration":"175.272939ms","start":"2025-12-17T08:41:56.354239Z","end":"2025-12-17T08:41:56.529512Z","steps":["trace[1948361033] 'process raft request'  (duration: 175.143067ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:41:56.529998Z","caller":"traceutil/trace.go:172","msg":"trace[253558163] linearizableReadLoop","detail":"{readStateIndex:1087; appliedIndex:1087; }","duration":"127.440992ms","start":"2025-12-17T08:41:56.401836Z","end":"2025-12-17T08:41:56.529277Z","steps":["trace[253558163] 'read index received'  (duration: 127.433326ms)","trace[253558163] 'applied index is now lower than readState.Index'  (duration: 7.11µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T08:41:56.531438Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.585008ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T08:41:56.532745Z","caller":"traceutil/trace.go:172","msg":"trace[292755331] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:981; }","duration":"130.903099ms","start":"2025-12-17T08:41:56.401832Z","end":"2025-12-17T08:41:56.532735Z","steps":["trace[292755331] 'agreement among raft nodes before linearized reading'  (duration: 128.615645ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T08:42:00.962872Z","caller":"traceutil/trace.go:172","msg":"trace[1307327592] transaction","detail":"{read_only:false; response_revision:984; number_of_response:1; }","duration":"382.715082ms","start":"2025-12-17T08:42:00.580142Z","end":"2025-12-17T08:42:00.962857Z","steps":["trace[1307327592] 'process raft request'  (duration: 381.292937ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T08:42:00.964274Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T08:42:00.580129Z","time spent":"382.855115ms","remote":"127.0.0.1:57126","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:983 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-12-17T08:48:55.174676Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1105}
	{"level":"info","ts":"2025-12-17T08:48:55.198465Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1105,"took":"22.982631ms","hash":2728550868,"current-db-size-bytes":3543040,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1609728,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-12-17T08:48:55.198513Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2728550868,"revision":1105,"compact-revision":-1}
	
	
	==> kernel <==
	 08:49:18 up 13 min,  0 users,  load average: 0.56, 0.48, 0.34
	Linux functional-452472 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Dec 16 03:41:16 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [afae314e118b19ff9c06569c5a14f8f91ab46fec1e8130e020e703f720115437] <==
	I1217 08:38:56.536814       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1217 08:38:56.558381       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 08:38:56.640070       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 08:38:57.344110       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 08:38:58.358416       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 08:38:58.395793       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 08:38:58.427807       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 08:38:58.437323       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 08:39:00.019393       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 08:39:00.070586       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 08:39:00.119536       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 08:39:12.099203       1 alloc.go:329] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.47.213"}
	I1217 08:39:16.395273       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.116.62"}
	I1217 08:39:17.059860       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.104.43.167"}
	E1217 08:40:31.206675       1 conn.go:339] Error on socket receive: read tcp 192.168.39.226:8441->192.168.39.1:52190: use of closed network connection
	I1217 08:40:34.046283       1 alloc.go:329] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.104.6.220"}
	E1217 08:40:38.936021       1 conn.go:339] Error on socket receive: read tcp 192.168.39.226:8441->192.168.39.1:55152: use of closed network connection
	I1217 08:40:39.805642       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 08:40:40.059571       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.57.122"}
	I1217 08:40:40.078857       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.159.11"}
	E1217 08:42:02.246281       1 conn.go:339] Error on socket receive: read tcp 192.168.39.226:8441->192.168.39.1:41266: use of closed network connection
	E1217 08:42:03.172407       1 conn.go:339] Error on socket receive: read tcp 192.168.39.226:8441->192.168.39.1:41272: use of closed network connection
	E1217 08:42:04.109439       1 conn.go:339] Error on socket receive: read tcp 192.168.39.226:8441->192.168.39.1:41294: use of closed network connection
	E1217 08:42:06.639304       1 conn.go:339] Error on socket receive: read tcp 192.168.39.226:8441->192.168.39.1:41312: use of closed network connection
	I1217 08:48:56.440059       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [9aaa296c1ec268580ef35dfb6084b2d11765a0551fd3025f7cb535ed69d8c036] <==
	I1217 08:38:16.306809       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.306853       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.306898       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.306969       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.307062       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.307100       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.307241       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.307292       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.307313       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.307327       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.307355       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.307378       1 range_allocator.go:177] "Sending events to api server"
	I1217 08:38:16.307410       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1217 08:38:16.307414       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:38:16.307417       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.307493       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.307546       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.307588       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.307641       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.311902       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:38:16.332293       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.402703       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:16.402719       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 08:38:16.402723       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 08:38:16.412817       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-controller-manager [d8e314a4cd7ebcb67b1f912124f948ae99758af84ebad9776b3b3602546d454b] <==
	I1217 08:38:59.632912       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:59.633011       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:59.633107       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:59.633213       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:59.633260       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:59.633365       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:59.633455       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:59.629646       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:59.629654       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:59.634044       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:59.629747       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:59.635307       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:59.647256       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:38:59.675410       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:59.731265       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:59.731333       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 08:38:59.731354       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 08:38:59.747401       1 shared_informer.go:377] "Caches are synced"
	E1217 08:40:39.904682       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 08:40:39.907328       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 08:40:39.912630       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 08:40:39.921492       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 08:40:39.921881       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 08:40:39.934450       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 08:40:39.938224       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [66a79860283a5d688bfe587f86250a792d36d69e5ac6ad4f9fb2f0d883b408cb] <==
	I1217 08:38:57.928683       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:38:58.031590       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:58.031630       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.226"]
	E1217 08:38:58.031700       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 08:38:58.165581       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1217 08:38:58.166145       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1217 08:38:58.166295       1 server_linux.go:136] "Using iptables Proxier"
	I1217 08:38:58.184255       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 08:38:58.189234       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1217 08:38:58.189248       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:38:58.200254       1 config.go:200] "Starting service config controller"
	I1217 08:38:58.200265       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 08:38:58.200334       1 config.go:106] "Starting endpoint slice config controller"
	I1217 08:38:58.200339       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 08:38:58.200368       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 08:38:58.200373       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 08:38:58.204656       1 config.go:309] "Starting node config controller"
	I1217 08:38:58.204787       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 08:38:58.204796       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 08:38:58.301605       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 08:38:58.301644       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 08:38:58.301678       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [bb75140ee1acd6e5714883d02293293abb7231572ea9276571847bb58080c36e] <==
	I1217 08:38:13.970604       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:38:14.070869       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:14.071069       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.226"]
	E1217 08:38:14.071402       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 08:38:14.114493       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1217 08:38:14.114676       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1217 08:38:14.114754       1 server_linux.go:136] "Using iptables Proxier"
	I1217 08:38:14.126138       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 08:38:14.126444       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1217 08:38:14.126470       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:38:14.132669       1 config.go:200] "Starting service config controller"
	I1217 08:38:14.134078       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 08:38:14.134499       1 config.go:106] "Starting endpoint slice config controller"
	I1217 08:38:14.134510       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 08:38:14.134521       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 08:38:14.134524       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 08:38:14.135016       1 config.go:309] "Starting node config controller"
	I1217 08:38:14.135023       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 08:38:14.135080       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 08:38:14.235110       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 08:38:14.235274       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 08:38:14.234998       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [4bbc4201b4702bae790066874d19fed5d287ad766c42b9fa697cc27070c56ab9] <==
	I1217 08:38:55.027638       1 serving.go:386] Generated self-signed cert in-memory
	W1217 08:38:56.408069       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 08:38:56.408238       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 08:38:56.408328       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 08:38:56.408351       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 08:38:56.454683       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1217 08:38:56.454824       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:38:56.459353       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 08:38:56.460003       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:38:56.460036       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:38:56.460052       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 08:38:56.560261       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [56fd93bc6d042ffc6d5e70b53079a47da3495248a5349c01d8f3a3ac359bf531] <==
	I1217 08:38:11.964464       1 serving.go:386] Generated self-signed cert in-memory
	W1217 08:38:13.111726       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 08:38:13.111820       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 08:38:13.111829       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 08:38:13.111891       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 08:38:13.166743       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1217 08:38:13.166776       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 08:38:13.180382       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 08:38:13.180543       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:38:13.180571       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 08:38:13.180591       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 08:38:13.281230       1 shared_informer.go:377] "Caches are synced"
	I1217 08:38:39.198775       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1217 08:38:39.198828       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1217 08:38:39.198853       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1217 08:38:39.198895       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 08:38:39.199270       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1217 08:38:39.199327       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 17 08:48:53 functional-452472 kubelet[7366]: E1217 08:48:53.688377    7366 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod59a64983b2a6e32426f2b85bf4025ab6/crio-b8a40898664357842f4a01c490a316886b9502b89edcb571e646fed65e16d0b9: Error finding container b8a40898664357842f4a01c490a316886b9502b89edcb571e646fed65e16d0b9: Status 404 returned error can't find the container with id b8a40898664357842f4a01c490a316886b9502b89edcb571e646fed65e16d0b9
	Dec 17 08:48:53 functional-452472 kubelet[7366]: E1217 08:48:53.689048    7366 manager.go:1119] Failed to create existing container: /kubepods/besteffort/podcc28e214-79dc-4410-9e19-5e01dc8c177e/crio-3a70ccdc3e38a0bd793053d58cf9bef42ae122e408cbe7c66674d9eadbdff820: Error finding container 3a70ccdc3e38a0bd793053d58cf9bef42ae122e408cbe7c66674d9eadbdff820: Status 404 returned error can't find the container with id 3a70ccdc3e38a0bd793053d58cf9bef42ae122e408cbe7c66674d9eadbdff820
	Dec 17 08:48:53 functional-452472 kubelet[7366]: E1217 08:48:53.689447    7366 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod5767dc2c-d2a7-40df-9980-cf0eb5099135/crio-71812c39917583f284a2a4ed5532a76e8241ef9f313532cbbb5102c9f47a0aa6: Error finding container 71812c39917583f284a2a4ed5532a76e8241ef9f313532cbbb5102c9f47a0aa6: Status 404 returned error can't find the container with id 71812c39917583f284a2a4ed5532a76e8241ef9f313532cbbb5102c9f47a0aa6
	Dec 17 08:48:53 functional-452472 kubelet[7366]: E1217 08:48:53.689786    7366 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod5c66f6f2e75aeefdfac5925984824a19/crio-715e5eeee18dfc4b463d53d93a80cf8ab421378058d8c21e32071b228d5553f3: Error finding container 715e5eeee18dfc4b463d53d93a80cf8ab421378058d8c21e32071b228d5553f3: Status 404 returned error can't find the container with id 715e5eeee18dfc4b463d53d93a80cf8ab421378058d8c21e32071b228d5553f3
	Dec 17 08:48:53 functional-452472 kubelet[7366]: E1217 08:48:53.690207    7366 manager.go:1119] Failed to create existing container: /kubepods/burstable/podd663db3988802dd0b7f3a700e2703644/crio-afad058a661b32de13f3638ac69bece90b9fa781f8c8dc6edb25b23ddb026fc0: Error finding container afad058a661b32de13f3638ac69bece90b9fa781f8c8dc6edb25b23ddb026fc0: Status 404 returned error can't find the container with id afad058a661b32de13f3638ac69bece90b9fa781f8c8dc6edb25b23ddb026fc0
	Dec 17 08:48:53 functional-452472 kubelet[7366]: E1217 08:48:53.690484    7366 manager.go:1119] Failed to create existing container: /kubepods/besteffort/pod006580b2-c5aa-46f2-a109-0b4e4293a31d/crio-69fa824c620eb19adc3f011c88a7a2667aaf4b0454d144f44b6057ca46b8c0e4: Error finding container 69fa824c620eb19adc3f011c88a7a2667aaf4b0454d144f44b6057ca46b8c0e4: Status 404 returned error can't find the container with id 69fa824c620eb19adc3f011c88a7a2667aaf4b0454d144f44b6057ca46b8c0e4
	Dec 17 08:48:53 functional-452472 kubelet[7366]: E1217 08:48:53.690735    7366 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod44ccdb9e-3552-4d0c-aa79-209aa4bc384e/crio-f2694d814f15a1423255f88ffab3a14679cad607b610b3d46cae4f97aba353e4: Error finding container f2694d814f15a1423255f88ffab3a14679cad607b610b3d46cae4f97aba353e4: Status 404 returned error can't find the container with id f2694d814f15a1423255f88ffab3a14679cad607b610b3d46cae4f97aba353e4
	Dec 17 08:48:54 functional-452472 kubelet[7366]: E1217 08:48:54.080477    7366 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765961334079978607  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242163}  inodes_used:{value:113}}"
	Dec 17 08:48:54 functional-452472 kubelet[7366]: E1217 08:48:54.080521    7366 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765961334079978607  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242163}  inodes_used:{value:113}}"
	Dec 17 08:48:54 functional-452472 kubelet[7366]: E1217 08:48:54.293910    7366 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 17 08:48:54 functional-452472 kubelet[7366]: E1217 08:48:54.293974    7366 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 17 08:48:54 functional-452472 kubelet[7366]: E1217 08:48:54.295335    7366 kuberuntime_manager.go:1664] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-5565989548-rm7wm_kubernetes-dashboard(2ed28554-e147-40f1-9c98-22fee95237ba): ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 17 08:48:54 functional-452472 kubelet[7366]: E1217 08:48:54.295437    7366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-rm7wm" podUID="2ed28554-e147-40f1-9c98-22fee95237ba"
	Dec 17 08:48:56 functional-452472 kubelet[7366]: E1217 08:48:56.574968    7366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-92scj" podUID="764ca098-86c6-4aef-8662-0d99cec3f081"
	Dec 17 08:49:01 functional-452472 kubelet[7366]: E1217 08:49:01.577689    7366 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-452472" containerName="kube-scheduler"
	Dec 17 08:49:03 functional-452472 kubelet[7366]: E1217 08:49:03.575383    7366 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-452472" containerName="kube-apiserver"
	Dec 17 08:49:04 functional-452472 kubelet[7366]: E1217 08:49:04.082955    7366 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765961344082508471  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242163}  inodes_used:{value:113}}"
	Dec 17 08:49:04 functional-452472 kubelet[7366]: E1217 08:49:04.083033    7366 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765961344082508471  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242163}  inodes_used:{value:113}}"
	Dec 17 08:49:05 functional-452472 kubelet[7366]: E1217 08:49:05.574739    7366 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-vlpbt" containerName="coredns"
	Dec 17 08:49:06 functional-452472 kubelet[7366]: E1217 08:49:06.573863    7366 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-rm7wm" containerName="dashboard-metrics-scraper"
	Dec 17 08:49:06 functional-452472 kubelet[7366]: E1217 08:49:06.575497    7366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-rm7wm" podUID="2ed28554-e147-40f1-9c98-22fee95237ba"
	Dec 17 08:49:09 functional-452472 kubelet[7366]: E1217 08:49:09.574565    7366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-92scj" podUID="764ca098-86c6-4aef-8662-0d99cec3f081"
	Dec 17 08:49:14 functional-452472 kubelet[7366]: E1217 08:49:14.085256    7366 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765961354084818066  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242163}  inodes_used:{value:113}}"
	Dec 17 08:49:14 functional-452472 kubelet[7366]: E1217 08:49:14.085279    7366 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765961354084818066  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:242163}  inodes_used:{value:113}}"
	Dec 17 08:49:18 functional-452472 kubelet[7366]: E1217 08:49:18.574494    7366 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-452472" containerName="kube-controller-manager"
	
	
	==> storage-provisioner [1ae20198e715809c55c5aa1cc0b3381b754821cfe0b0c46ae53cae097d05216e] <==
	W1217 08:48:53.071706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:48:55.074621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:48:55.078881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:48:57.083892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:48:57.089433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:48:59.092648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:48:59.102623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:49:01.105212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:49:01.110873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:49:03.115360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:49:03.129257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:49:05.132114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:49:05.136074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:49:07.138946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:49:07.143552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:49:09.147382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:49:09.152287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:49:11.155876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:49:11.160870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:49:13.164914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:49:13.170358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:49:15.173274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:49:15.182231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:49:17.186027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:49:17.190456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [efc92cf4ba908fee23149245dbc756291b252ab89d06ae1cd620e3763c633f12] <==
	I1217 08:36:14.322109       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 08:36:14.333377       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 08:36:14.333426       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 08:36:14.335878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:36:14.342037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 08:36:14.342382       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 08:36:14.342958       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"376db418-cc6c-431a-aa36-733dd71501f9", APIVersion:"v1", ResourceVersion:"385", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-452472_c8a0e0d7-2363-48c0-9a83-6b882add5351 became leader
	I1217 08:36:14.343002       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-452472_c8a0e0d7-2363-48c0-9a83-6b882add5351!
	W1217 08:36:14.344850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:36:14.352919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 08:36:14.443397       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-452472_c8a0e0d7-2363-48c0-9a83-6b882add5351!
	W1217 08:36:16.355934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:36:16.362782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:36:18.365827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:36:18.371143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:36:20.374941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:36:20.382691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:36:22.386270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:36:22.395556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:36:24.399296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:36:24.404106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:36:26.407368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 08:36:26.412414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-452472 -n functional-452472
helpers_test.go:270: (dbg) Run:  kubectl --context functional-452472 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-5758569b79-92scj hello-node-connect-9f67c86d4-w5n8n dashboard-metrics-scraper-5565989548-rm7wm kubernetes-dashboard-b84665fb8-4d69z
helpers_test.go:283: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-452472 describe pod busybox-mount hello-node-5758569b79-92scj hello-node-connect-9f67c86d4-w5n8n dashboard-metrics-scraper-5565989548-rm7wm kubernetes-dashboard-b84665fb8-4d69z
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-452472 describe pod busybox-mount hello-node-5758569b79-92scj hello-node-connect-9f67c86d4-w5n8n dashboard-metrics-scraper-5565989548-rm7wm kubernetes-dashboard-b84665fb8-4d69z: exit status 1 (110.007474ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-452472/192.168.39.226
	Start Time:       Wed, 17 Dec 2025 08:39:20 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.15
	IPs:
	  IP:  10.244.0.15
	Containers:
	  mount-munger:
	    Container ID:  cri-o://1bafea12ac56a396c9e3c5b4985c434f4aa90ca96f2043cf0b42335c4ccfaee4
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 17 Dec 2025 08:40:19 +0000
	      Finished:     Wed, 17 Dec 2025 08:40:19 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vq2f7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-vq2f7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m59s  default-scheduler  Successfully assigned default/busybox-mount to functional-452472
	  Normal  Pulling    9m59s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m     kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.038s (58.962s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m     kubelet            Container created
	  Normal  Started    9m     kubelet            Container started
	
	
	Name:             hello-node-5758569b79-92scj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-452472/192.168.39.226
	Start Time:       Wed, 17 Dec 2025 08:39:16 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:           10.244.0.13
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mmxsc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-mmxsc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-5758569b79-92scj to functional-452472
	  Warning  Failed     9m31s                 kubelet            Failed to pull image "kicbase/echo-server": initializing source docker://kicbase/echo-server:latest: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m22s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     86s (x5 over 9m31s)   kubelet            Error: ErrImagePull
	  Warning  Failed     86s (x4 over 8m24s)   kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    10s (x16 over 9m31s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     10s (x16 over 9m31s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-9f67c86d4-w5n8n
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-452472/192.168.39.226
	Start Time:       Wed, 17 Dec 2025 08:39:17 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.14
	IPs:
	  IP:           10.244.0.14
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-26h8s (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-26h8s:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-w5n8n to functional-452472
	  Warning  Failed     9m1s                   kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m58s (x4 over 9m1s)   kubelet            Error: ErrImagePull
	  Warning  Failed     2m58s (x3 over 7m39s)  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    102s (x11 over 9m)     kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     102s (x11 over 9m)     kubelet            Error: ImagePullBackOff
	  Normal   Pulling    89s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-rm7wm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-4d69z" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context functional-452472 describe pod busybox-mount hello-node-5758569b79-92scj hello-node-connect-9f67c86d4-w5n8n dashboard-metrics-scraper-5565989548-rm7wm kubernetes-dashboard-b84665fb8-4d69z: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (602.81s)
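Both hello-node pods above never start because every anonymous pull of kicbase/echo-server is rejected by Docker Hub with "toomanyrequests". The following is a hedged mitigation sketch, not something the test itself does: it assumes Docker is available (and not itself rate-limited) on the host, reuses the profile/context name functional-452472 and the :latest tag from the events above, uses the hello-node deployment as the example (hello-node-connect would be analogous), and sets imagePullPolicy to IfNotPresent so the side-loaded copy is used (with :latest the policy would otherwise default to Always).

	# Pull once on the host, then side-load into the profile so the kubelet
	# never has to contact Docker Hub from inside the cluster.
	docker pull docker.io/kicbase/echo-server:latest
	out/minikube-linux-amd64 -p functional-452472 image load docker.io/kicbase/echo-server:latest

	# Recreate the deployment with a pull policy that prefers the pre-loaded copy.
	# "<<-" strips the leading tabs used for layout in this sketch.
	cat <<-'EOF' | kubectl --context functional-452472 apply -f -
	apiVersion: apps/v1
	kind: Deployment
	metadata:
	  name: hello-node
	spec:
	  replicas: 1
	  selector:
	    matchLabels:
	      app: hello-node
	  template:
	    metadata:
	      labels:
	        app: hello-node
	    spec:
	      containers:
	      - name: echo-server
	        image: docker.io/kicbase/echo-server:latest
	        imagePullPolicy: IfNotPresent
	EOF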

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (600.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-452472 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-452472 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-92scj" [764ca098-86c6-4aef-8662-0d99cec3f081] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-452472 -n functional-452472
functional_test.go:1460: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-17 08:49:16.637122578 +0000 UTC m=+2031.372319927
functional_test.go:1460: (dbg) Run:  kubectl --context functional-452472 describe po hello-node-5758569b79-92scj -n default
functional_test.go:1460: (dbg) kubectl --context functional-452472 describe po hello-node-5758569b79-92scj -n default:
Name:             hello-node-5758569b79-92scj
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-452472/192.168.39.226
Start Time:       Wed, 17 Dec 2025 08:39:16 +0000
Labels:           app=hello-node
                  pod-template-hash=5758569b79
Annotations:      <none>
Status:           Pending
IP:               10.244.0.13
IPs:
  IP:           10.244.0.13
Controlled By:  ReplicaSet/hello-node-5758569b79
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mmxsc (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-mmxsc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-5758569b79-92scj to functional-452472
  Warning  Failed     9m28s                kubelet            Failed to pull image "kicbase/echo-server": initializing source docker://kicbase/echo-server:latest: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    2m19s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     83s (x5 over 9m28s)  kubelet            Error: ErrImagePull
  Warning  Failed     83s (x4 over 8m21s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    7s (x16 over 9m28s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     7s (x16 over 9m28s)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-452472 logs hello-node-5758569b79-92scj -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-452472 logs hello-node-5758569b79-92scj -n default: exit status 1 (75.762374ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-5758569b79-92scj" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-452472 logs hello-node-5758569b79-92scj -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (600.55s)
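This DeployApp failure has the same root cause: the deployment is created, but its single pod never leaves ImagePullBackOff, so the 10m wait times out. A small, hedged sketch of how the rollout state and the waiting reason could be surfaced directly; the context, deployment, and label names are taken from the commands above:

	# Wait on the Deployment itself instead of polling the pod list.
	kubectl --context functional-452472 rollout status deployment/hello-node --timeout=10m

	# Show each pod's phase and, when it is waiting, the reason (e.g. ImagePullBackOff).
	kubectl --context functional-452472 get pods -l app=hello-node \
	  -o custom-columns='NAME:.metadata.name,PHASE:.status.phase,WAITING:.status.containerStatuses[0].state.waiting.reason'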

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452472 service --namespace=default --https --url hello-node: exit status 115 (263.825325ms)

                                                
                                                
-- stdout --
	https://192.168.39.226:31322
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-452472 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452472 service hello-node --url --format={{.IP}}: exit status 115 (279.614037ms)

                                                
                                                
-- stdout --
	192.168.39.226
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-452472 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452472 service hello-node --url: exit status 115 (240.83069ms)

                                                
                                                
-- stdout --
	http://192.168.39.226:31322
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-452472 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.226:31322
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.24s)
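The HTTPS, Format, and URL subtests all fail the same way: "minikube service" resolves the NodePort URL but exits with SVC_UNREACHABLE because no running pod backs the hello-node service. A hedged sketch of the equivalent manual check, reusing the context and service name from the commands above:

	# The Service and its NodePort exist...
	kubectl --context functional-452472 get svc hello-node -o wide

	# ...but with no Ready pod the endpoints stay empty, which is what SVC_UNREACHABLE reports.
	kubectl --context functional-452472 get endpoints hello-node
	kubectl --context functional-452472 get pods -l app=hello-node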

                                                
                                    
TestPreload (117.29s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-147081 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio
E1217 09:22:45.365434  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-147081 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio: (1m4.640715986s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-147081 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-147081 image pull gcr.io/k8s-minikube/busybox: (1.201368839s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-147081
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-147081: (6.886448117s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-147081 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-147081 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (41.849704516s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-147081 image list
preload_test.go:73: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/kube-scheduler:v1.34.3
	registry.k8s.io/kube-proxy:v1.34.3
	registry.k8s.io/kube-controller-manager:v1.34.3
	registry.k8s.io/kube-apiserver:v1.34.3
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20250512-df8de77b

                                                
                                                
-- /stdout --
panic.go:615: *** TestPreload FAILED at 2025-12-17 09:24:01.403345809 +0000 UTC m=+4116.138543160
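gcr.io/k8s-minikube/busybox was pulled into the profile before the stop, yet after restarting with --preload=true the image list shows only the preloaded Kubernetes images. A hedged debugging sketch, assuming the VM is still up, that compares minikube's view with the CRI-O image store inside the guest; if busybox is missing there as well, the restart most likely repopulated the store from the preload tarball alone:

	# minikube's view of the images in the profile
	out/minikube-linux-amd64 -p test-preload-147081 image list

	# the CRI-O store inside the VM
	out/minikube-linux-amd64 -p test-preload-147081 ssh -- sudo crictl images | grep busybox \
	  || echo "busybox not present in CRI-O store"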
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-147081 -n test-preload-147081
helpers_test.go:253: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-147081 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p test-preload-147081 logs -n 25: (1.079702778s)
helpers_test.go:261: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-046714 ssh -n multinode-046714-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-046714     │ jenkins │ v1.37.0 │ 17 Dec 25 09:11 UTC │ 17 Dec 25 09:11 UTC │
	│ ssh     │ multinode-046714 ssh -n multinode-046714 sudo cat /home/docker/cp-test_multinode-046714-m03_multinode-046714.txt                                          │ multinode-046714     │ jenkins │ v1.37.0 │ 17 Dec 25 09:11 UTC │ 17 Dec 25 09:11 UTC │
	│ cp      │ multinode-046714 cp multinode-046714-m03:/home/docker/cp-test.txt multinode-046714-m02:/home/docker/cp-test_multinode-046714-m03_multinode-046714-m02.txt │ multinode-046714     │ jenkins │ v1.37.0 │ 17 Dec 25 09:11 UTC │ 17 Dec 25 09:11 UTC │
	│ ssh     │ multinode-046714 ssh -n multinode-046714-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-046714     │ jenkins │ v1.37.0 │ 17 Dec 25 09:11 UTC │ 17 Dec 25 09:11 UTC │
	│ ssh     │ multinode-046714 ssh -n multinode-046714-m02 sudo cat /home/docker/cp-test_multinode-046714-m03_multinode-046714-m02.txt                                  │ multinode-046714     │ jenkins │ v1.37.0 │ 17 Dec 25 09:11 UTC │ 17 Dec 25 09:11 UTC │
	│ node    │ multinode-046714 node stop m03                                                                                                                            │ multinode-046714     │ jenkins │ v1.37.0 │ 17 Dec 25 09:11 UTC │ 17 Dec 25 09:11 UTC │
	│ node    │ multinode-046714 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-046714     │ jenkins │ v1.37.0 │ 17 Dec 25 09:11 UTC │ 17 Dec 25 09:12 UTC │
	│ node    │ list -p multinode-046714                                                                                                                                  │ multinode-046714     │ jenkins │ v1.37.0 │ 17 Dec 25 09:12 UTC │                     │
	│ stop    │ -p multinode-046714                                                                                                                                       │ multinode-046714     │ jenkins │ v1.37.0 │ 17 Dec 25 09:12 UTC │ 17 Dec 25 09:14 UTC │
	│ start   │ -p multinode-046714 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-046714     │ jenkins │ v1.37.0 │ 17 Dec 25 09:14 UTC │ 17 Dec 25 09:17 UTC │
	│ node    │ list -p multinode-046714                                                                                                                                  │ multinode-046714     │ jenkins │ v1.37.0 │ 17 Dec 25 09:17 UTC │                     │
	│ node    │ multinode-046714 node delete m03                                                                                                                          │ multinode-046714     │ jenkins │ v1.37.0 │ 17 Dec 25 09:17 UTC │ 17 Dec 25 09:17 UTC │
	│ stop    │ multinode-046714 stop                                                                                                                                     │ multinode-046714     │ jenkins │ v1.37.0 │ 17 Dec 25 09:17 UTC │ 17 Dec 25 09:20 UTC │
	│ start   │ -p multinode-046714 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-046714     │ jenkins │ v1.37.0 │ 17 Dec 25 09:20 UTC │ 17 Dec 25 09:21 UTC │
	│ node    │ list -p multinode-046714                                                                                                                                  │ multinode-046714     │ jenkins │ v1.37.0 │ 17 Dec 25 09:21 UTC │                     │
	│ start   │ -p multinode-046714-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-046714-m02 │ jenkins │ v1.37.0 │ 17 Dec 25 09:21 UTC │                     │
	│ start   │ -p multinode-046714-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-046714-m03 │ jenkins │ v1.37.0 │ 17 Dec 25 09:21 UTC │ 17 Dec 25 09:22 UTC │
	│ node    │ add -p multinode-046714                                                                                                                                   │ multinode-046714     │ jenkins │ v1.37.0 │ 17 Dec 25 09:22 UTC │                     │
	│ delete  │ -p multinode-046714-m03                                                                                                                                   │ multinode-046714-m03 │ jenkins │ v1.37.0 │ 17 Dec 25 09:22 UTC │ 17 Dec 25 09:22 UTC │
	│ delete  │ -p multinode-046714                                                                                                                                       │ multinode-046714     │ jenkins │ v1.37.0 │ 17 Dec 25 09:22 UTC │ 17 Dec 25 09:22 UTC │
	│ start   │ -p test-preload-147081 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio                                │ test-preload-147081  │ jenkins │ v1.37.0 │ 17 Dec 25 09:22 UTC │ 17 Dec 25 09:23 UTC │
	│ image   │ test-preload-147081 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-147081  │ jenkins │ v1.37.0 │ 17 Dec 25 09:23 UTC │ 17 Dec 25 09:23 UTC │
	│ stop    │ -p test-preload-147081                                                                                                                                    │ test-preload-147081  │ jenkins │ v1.37.0 │ 17 Dec 25 09:23 UTC │ 17 Dec 25 09:23 UTC │
	│ start   │ -p test-preload-147081 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                          │ test-preload-147081  │ jenkins │ v1.37.0 │ 17 Dec 25 09:23 UTC │ 17 Dec 25 09:24 UTC │
	│ image   │ test-preload-147081 image list                                                                                                                            │ test-preload-147081  │ jenkins │ v1.37.0 │ 17 Dec 25 09:24 UTC │ 17 Dec 25 09:24 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 09:23:19
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 09:23:19.411583  926762 out.go:360] Setting OutFile to fd 1 ...
	I1217 09:23:19.411722  926762 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 09:23:19.411733  926762 out.go:374] Setting ErrFile to fd 2...
	I1217 09:23:19.411737  926762 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 09:23:19.411959  926762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
	I1217 09:23:19.412420  926762 out.go:368] Setting JSON to false
	I1217 09:23:19.413377  926762 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":14745,"bootTime":1765948654,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 09:23:19.413438  926762 start.go:143] virtualization: kvm guest
	I1217 09:23:19.415653  926762 out.go:179] * [test-preload-147081] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 09:23:19.417033  926762 notify.go:221] Checking for updates...
	I1217 09:23:19.417053  926762 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 09:23:19.418455  926762 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 09:23:19.419857  926762 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-893359/kubeconfig
	I1217 09:23:19.421260  926762 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-893359/.minikube
	I1217 09:23:19.422682  926762 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 09:23:19.424044  926762 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 09:23:19.425774  926762 config.go:182] Loaded profile config "test-preload-147081": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 09:23:19.426237  926762 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 09:23:19.460094  926762 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 09:23:19.461304  926762 start.go:309] selected driver: kvm2
	I1217 09:23:19.461335  926762 start.go:927] validating driver "kvm2" against &{Name:test-preload-147081 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.34.3 ClusterName:test-preload-147081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2621
44 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 09:23:19.461441  926762 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 09:23:19.462384  926762 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 09:23:19.462424  926762 cni.go:84] Creating CNI manager for ""
	I1217 09:23:19.462584  926762 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 09:23:19.462662  926762 start.go:353] cluster config:
	{Name:test-preload-147081 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:test-preload-147081 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 09:23:19.462760  926762 iso.go:125] acquiring lock: {Name:mk258687bf3be9c6817f84af5b9e08a4f47b5420 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 09:23:19.464192  926762 out.go:179] * Starting "test-preload-147081" primary control-plane node in "test-preload-147081" cluster
	I1217 09:23:19.465323  926762 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 09:23:19.465351  926762 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-893359/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 09:23:19.465370  926762 cache.go:65] Caching tarball of preloaded images
	I1217 09:23:19.465463  926762 preload.go:238] Found /home/jenkins/minikube-integration/22182-893359/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 09:23:19.465483  926762 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 09:23:19.465614  926762 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/test-preload-147081/config.json ...
	I1217 09:23:19.465851  926762 start.go:360] acquireMachinesLock for test-preload-147081: {Name:mkdc91ccb2d66cdada71da88e972b4d333b7f63c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1217 09:23:19.465902  926762 start.go:364] duration metric: took 29.297µs to acquireMachinesLock for "test-preload-147081"
	I1217 09:23:19.465923  926762 start.go:96] Skipping create...Using existing machine configuration
	I1217 09:23:19.465931  926762 fix.go:54] fixHost starting: 
	I1217 09:23:19.467830  926762 fix.go:112] recreateIfNeeded on test-preload-147081: state=Stopped err=<nil>
	W1217 09:23:19.467862  926762 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 09:23:19.469373  926762 out.go:252] * Restarting existing kvm2 VM for "test-preload-147081" ...
	I1217 09:23:19.469400  926762 main.go:143] libmachine: starting domain...
	I1217 09:23:19.469408  926762 main.go:143] libmachine: ensuring networks are active...
	I1217 09:23:19.470271  926762 main.go:143] libmachine: Ensuring network default is active
	I1217 09:23:19.470661  926762 main.go:143] libmachine: Ensuring network mk-test-preload-147081 is active
	I1217 09:23:19.471154  926762 main.go:143] libmachine: getting domain XML...
	I1217 09:23:19.472361  926762 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-147081</name>
	  <uuid>40941968-c443-4559-b3b9-eeb44b068573</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22182-893359/.minikube/machines/test-preload-147081/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22182-893359/.minikube/machines/test-preload-147081/test-preload-147081.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:ac:53:2f'/>
	      <source network='mk-test-preload-147081'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:22:5a:ef'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1217 09:23:20.816744  926762 main.go:143] libmachine: waiting for domain to start...
	I1217 09:23:20.818332  926762 main.go:143] libmachine: domain is now running
	I1217 09:23:20.818349  926762 main.go:143] libmachine: waiting for IP...
	I1217 09:23:20.819300  926762 main.go:143] libmachine: domain test-preload-147081 has defined MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:20.819948  926762 main.go:143] libmachine: domain test-preload-147081 has current primary IP address 192.168.39.188 and MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:20.819961  926762 main.go:143] libmachine: found domain IP: 192.168.39.188
	I1217 09:23:20.819966  926762 main.go:143] libmachine: reserving static IP address...
	I1217 09:23:20.820369  926762 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-147081", mac: "52:54:00:ac:53:2f", ip: "192.168.39.188"} in network mk-test-preload-147081: {Iface:virbr1 ExpiryTime:2025-12-17 10:22:21 +0000 UTC Type:0 Mac:52:54:00:ac:53:2f Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-147081 Clientid:01:52:54:00:ac:53:2f}
	I1217 09:23:20.820397  926762 main.go:143] libmachine: skip adding static IP to network mk-test-preload-147081 - found existing host DHCP lease matching {name: "test-preload-147081", mac: "52:54:00:ac:53:2f", ip: "192.168.39.188"}
	I1217 09:23:20.820408  926762 main.go:143] libmachine: reserved static IP address 192.168.39.188 for domain test-preload-147081
	I1217 09:23:20.820419  926762 main.go:143] libmachine: waiting for SSH...
	I1217 09:23:20.820427  926762 main.go:143] libmachine: Getting to WaitForSSH function...
	I1217 09:23:20.822685  926762 main.go:143] libmachine: domain test-preload-147081 has defined MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:20.822960  926762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ac:53:2f", ip: ""} in network mk-test-preload-147081: {Iface:virbr1 ExpiryTime:2025-12-17 10:22:21 +0000 UTC Type:0 Mac:52:54:00:ac:53:2f Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-147081 Clientid:01:52:54:00:ac:53:2f}
	I1217 09:23:20.822979  926762 main.go:143] libmachine: domain test-preload-147081 has defined IP address 192.168.39.188 and MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:20.823190  926762 main.go:143] libmachine: Using SSH client type: native
	I1217 09:23:20.823315  926762 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1217 09:23:20.823331  926762 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1217 09:23:23.872844  926762 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.188:22: connect: no route to host
	I1217 09:23:29.952998  926762 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.188:22: connect: no route to host
	I1217 09:23:33.073670  926762 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 09:23:33.077241  926762 main.go:143] libmachine: domain test-preload-147081 has defined MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:33.077785  926762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ac:53:2f", ip: ""} in network mk-test-preload-147081: {Iface:virbr1 ExpiryTime:2025-12-17 10:23:31 +0000 UTC Type:0 Mac:52:54:00:ac:53:2f Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-147081 Clientid:01:52:54:00:ac:53:2f}
	I1217 09:23:33.077823  926762 main.go:143] libmachine: domain test-preload-147081 has defined IP address 192.168.39.188 and MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:33.078117  926762 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/test-preload-147081/config.json ...
	I1217 09:23:33.078340  926762 machine.go:94] provisionDockerMachine start ...
	I1217 09:23:33.080753  926762 main.go:143] libmachine: domain test-preload-147081 has defined MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:33.081186  926762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ac:53:2f", ip: ""} in network mk-test-preload-147081: {Iface:virbr1 ExpiryTime:2025-12-17 10:23:31 +0000 UTC Type:0 Mac:52:54:00:ac:53:2f Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-147081 Clientid:01:52:54:00:ac:53:2f}
	I1217 09:23:33.081210  926762 main.go:143] libmachine: domain test-preload-147081 has defined IP address 192.168.39.188 and MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:33.081374  926762 main.go:143] libmachine: Using SSH client type: native
	I1217 09:23:33.081445  926762 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1217 09:23:33.081454  926762 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 09:23:33.209682  926762 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1217 09:23:33.209714  926762 buildroot.go:166] provisioning hostname "test-preload-147081"
	I1217 09:23:33.212897  926762 main.go:143] libmachine: domain test-preload-147081 has defined MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:33.213321  926762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ac:53:2f", ip: ""} in network mk-test-preload-147081: {Iface:virbr1 ExpiryTime:2025-12-17 10:23:31 +0000 UTC Type:0 Mac:52:54:00:ac:53:2f Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-147081 Clientid:01:52:54:00:ac:53:2f}
	I1217 09:23:33.213344  926762 main.go:143] libmachine: domain test-preload-147081 has defined IP address 192.168.39.188 and MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:33.213524  926762 main.go:143] libmachine: Using SSH client type: native
	I1217 09:23:33.213623  926762 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1217 09:23:33.213636  926762 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-147081 && echo "test-preload-147081" | sudo tee /etc/hostname
	I1217 09:23:33.360243  926762 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-147081
	
	I1217 09:23:33.363430  926762 main.go:143] libmachine: domain test-preload-147081 has defined MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:33.363831  926762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ac:53:2f", ip: ""} in network mk-test-preload-147081: {Iface:virbr1 ExpiryTime:2025-12-17 10:23:31 +0000 UTC Type:0 Mac:52:54:00:ac:53:2f Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-147081 Clientid:01:52:54:00:ac:53:2f}
	I1217 09:23:33.363857  926762 main.go:143] libmachine: domain test-preload-147081 has defined IP address 192.168.39.188 and MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:33.364030  926762 main.go:143] libmachine: Using SSH client type: native
	I1217 09:23:33.364152  926762 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1217 09:23:33.364171  926762 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-147081' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-147081/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-147081' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 09:23:33.487193  926762 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 09:23:33.487225  926762 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22182-893359/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-893359/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-893359/.minikube}
	I1217 09:23:33.487260  926762 buildroot.go:174] setting up certificates
	I1217 09:23:33.487272  926762 provision.go:84] configureAuth start
	I1217 09:23:33.490026  926762 main.go:143] libmachine: domain test-preload-147081 has defined MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:33.490361  926762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ac:53:2f", ip: ""} in network mk-test-preload-147081: {Iface:virbr1 ExpiryTime:2025-12-17 10:23:31 +0000 UTC Type:0 Mac:52:54:00:ac:53:2f Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-147081 Clientid:01:52:54:00:ac:53:2f}
	I1217 09:23:33.490381  926762 main.go:143] libmachine: domain test-preload-147081 has defined IP address 192.168.39.188 and MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:33.492885  926762 main.go:143] libmachine: domain test-preload-147081 has defined MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:33.493195  926762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ac:53:2f", ip: ""} in network mk-test-preload-147081: {Iface:virbr1 ExpiryTime:2025-12-17 10:23:31 +0000 UTC Type:0 Mac:52:54:00:ac:53:2f Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-147081 Clientid:01:52:54:00:ac:53:2f}
	I1217 09:23:33.493216  926762 main.go:143] libmachine: domain test-preload-147081 has defined IP address 192.168.39.188 and MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:33.493354  926762 provision.go:143] copyHostCerts
	I1217 09:23:33.493417  926762 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-893359/.minikube/ca.pem, removing ...
	I1217 09:23:33.493433  926762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-893359/.minikube/ca.pem
	I1217 09:23:33.493504  926762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-893359/.minikube/ca.pem (1078 bytes)
	I1217 09:23:33.493642  926762 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-893359/.minikube/cert.pem, removing ...
	I1217 09:23:33.493654  926762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-893359/.minikube/cert.pem
	I1217 09:23:33.493683  926762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-893359/.minikube/cert.pem (1123 bytes)
	I1217 09:23:33.493756  926762 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-893359/.minikube/key.pem, removing ...
	I1217 09:23:33.493764  926762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-893359/.minikube/key.pem
	I1217 09:23:33.493789  926762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-893359/.minikube/key.pem (1675 bytes)
	I1217 09:23:33.493858  926762 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-893359/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca-key.pem org=jenkins.test-preload-147081 san=[127.0.0.1 192.168.39.188 localhost minikube test-preload-147081]
	I1217 09:23:33.599670  926762 provision.go:177] copyRemoteCerts
	I1217 09:23:33.599726  926762 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 09:23:33.602497  926762 main.go:143] libmachine: domain test-preload-147081 has defined MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:33.602982  926762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ac:53:2f", ip: ""} in network mk-test-preload-147081: {Iface:virbr1 ExpiryTime:2025-12-17 10:23:31 +0000 UTC Type:0 Mac:52:54:00:ac:53:2f Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-147081 Clientid:01:52:54:00:ac:53:2f}
	I1217 09:23:33.603015  926762 main.go:143] libmachine: domain test-preload-147081 has defined IP address 192.168.39.188 and MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:33.603193  926762 sshutil.go:56] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/test-preload-147081/id_rsa Username:docker}
	I1217 09:23:33.689352  926762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 09:23:33.718223  926762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1217 09:23:33.746009  926762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 09:23:33.773184  926762 provision.go:87] duration metric: took 285.869124ms to configureAuth
	I1217 09:23:33.773222  926762 buildroot.go:189] setting minikube options for container-runtime
	I1217 09:23:33.773407  926762 config.go:182] Loaded profile config "test-preload-147081": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 09:23:33.776117  926762 main.go:143] libmachine: domain test-preload-147081 has defined MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:33.776474  926762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ac:53:2f", ip: ""} in network mk-test-preload-147081: {Iface:virbr1 ExpiryTime:2025-12-17 10:23:31 +0000 UTC Type:0 Mac:52:54:00:ac:53:2f Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-147081 Clientid:01:52:54:00:ac:53:2f}
	I1217 09:23:33.776501  926762 main.go:143] libmachine: domain test-preload-147081 has defined IP address 192.168.39.188 and MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:33.776650  926762 main.go:143] libmachine: Using SSH client type: native
	I1217 09:23:33.776732  926762 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1217 09:23:33.776752  926762 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 09:23:34.013001  926762 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 09:23:34.013030  926762 machine.go:97] duration metric: took 934.674659ms to provisionDockerMachine
	I1217 09:23:34.013046  926762 start.go:293] postStartSetup for "test-preload-147081" (driver="kvm2")
	I1217 09:23:34.013058  926762 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 09:23:34.013155  926762 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 09:23:34.016047  926762 main.go:143] libmachine: domain test-preload-147081 has defined MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:34.016418  926762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ac:53:2f", ip: ""} in network mk-test-preload-147081: {Iface:virbr1 ExpiryTime:2025-12-17 10:23:31 +0000 UTC Type:0 Mac:52:54:00:ac:53:2f Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-147081 Clientid:01:52:54:00:ac:53:2f}
	I1217 09:23:34.016448  926762 main.go:143] libmachine: domain test-preload-147081 has defined IP address 192.168.39.188 and MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:34.016620  926762 sshutil.go:56] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/test-preload-147081/id_rsa Username:docker}
	I1217 09:23:34.111142  926762 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 09:23:34.117024  926762 info.go:137] Remote host: Buildroot 2025.02
	I1217 09:23:34.117051  926762 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-893359/.minikube/addons for local assets ...
	I1217 09:23:34.117121  926762 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-893359/.minikube/files for local assets ...
	I1217 09:23:34.117261  926762 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-893359/.minikube/files/etc/ssl/certs/8972772.pem -> 8972772.pem in /etc/ssl/certs
	I1217 09:23:34.117391  926762 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 09:23:34.136289  926762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/files/etc/ssl/certs/8972772.pem --> /etc/ssl/certs/8972772.pem (1708 bytes)
	I1217 09:23:34.172697  926762 start.go:296] duration metric: took 159.63594ms for postStartSetup
	I1217 09:23:34.172774  926762 fix.go:56] duration metric: took 14.706843175s for fixHost
	I1217 09:23:34.175800  926762 main.go:143] libmachine: domain test-preload-147081 has defined MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:34.176271  926762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ac:53:2f", ip: ""} in network mk-test-preload-147081: {Iface:virbr1 ExpiryTime:2025-12-17 10:23:31 +0000 UTC Type:0 Mac:52:54:00:ac:53:2f Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-147081 Clientid:01:52:54:00:ac:53:2f}
	I1217 09:23:34.176305  926762 main.go:143] libmachine: domain test-preload-147081 has defined IP address 192.168.39.188 and MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:34.176503  926762 main.go:143] libmachine: Using SSH client type: native
	I1217 09:23:34.176615  926762 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1217 09:23:34.176627  926762 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1217 09:23:34.289430  926762 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765963414.247865416
	
	I1217 09:23:34.289456  926762 fix.go:216] guest clock: 1765963414.247865416
	I1217 09:23:34.289478  926762 fix.go:229] Guest: 2025-12-17 09:23:34.247865416 +0000 UTC Remote: 2025-12-17 09:23:34.172784902 +0000 UTC m=+14.812543341 (delta=75.080514ms)
	I1217 09:23:34.289528  926762 fix.go:200] guest clock delta is within tolerance: 75.080514ms
	I1217 09:23:34.289536  926762 start.go:83] releasing machines lock for "test-preload-147081", held for 14.823620238s
	I1217 09:23:34.292398  926762 main.go:143] libmachine: domain test-preload-147081 has defined MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:34.292825  926762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ac:53:2f", ip: ""} in network mk-test-preload-147081: {Iface:virbr1 ExpiryTime:2025-12-17 10:23:31 +0000 UTC Type:0 Mac:52:54:00:ac:53:2f Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-147081 Clientid:01:52:54:00:ac:53:2f}
	I1217 09:23:34.292856  926762 main.go:143] libmachine: domain test-preload-147081 has defined IP address 192.168.39.188 and MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:34.293441  926762 ssh_runner.go:195] Run: cat /version.json
	I1217 09:23:34.293538  926762 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 09:23:34.296630  926762 main.go:143] libmachine: domain test-preload-147081 has defined MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:34.296701  926762 main.go:143] libmachine: domain test-preload-147081 has defined MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:34.297074  926762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ac:53:2f", ip: ""} in network mk-test-preload-147081: {Iface:virbr1 ExpiryTime:2025-12-17 10:23:31 +0000 UTC Type:0 Mac:52:54:00:ac:53:2f Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-147081 Clientid:01:52:54:00:ac:53:2f}
	I1217 09:23:34.297091  926762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ac:53:2f", ip: ""} in network mk-test-preload-147081: {Iface:virbr1 ExpiryTime:2025-12-17 10:23:31 +0000 UTC Type:0 Mac:52:54:00:ac:53:2f Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-147081 Clientid:01:52:54:00:ac:53:2f}
	I1217 09:23:34.297104  926762 main.go:143] libmachine: domain test-preload-147081 has defined IP address 192.168.39.188 and MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:34.297109  926762 main.go:143] libmachine: domain test-preload-147081 has defined IP address 192.168.39.188 and MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:34.297287  926762 sshutil.go:56] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/test-preload-147081/id_rsa Username:docker}
	I1217 09:23:34.297302  926762 sshutil.go:56] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/test-preload-147081/id_rsa Username:docker}
	I1217 09:23:34.403464  926762 ssh_runner.go:195] Run: systemctl --version
	I1217 09:23:34.409393  926762 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 09:23:34.557372  926762 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 09:23:34.564785  926762 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 09:23:34.564846  926762 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 09:23:34.583855  926762 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 09:23:34.583880  926762 start.go:496] detecting cgroup driver to use...
	I1217 09:23:34.583957  926762 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 09:23:34.602575  926762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 09:23:34.618447  926762 docker.go:218] disabling cri-docker service (if available) ...
	I1217 09:23:34.618540  926762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 09:23:34.634461  926762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 09:23:34.649998  926762 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 09:23:34.791553  926762 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 09:23:35.001330  926762 docker.go:234] disabling docker service ...
	I1217 09:23:35.001395  926762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 09:23:35.017654  926762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 09:23:35.031941  926762 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 09:23:35.181271  926762 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 09:23:35.320174  926762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 09:23:35.335954  926762 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 09:23:35.358928  926762 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 09:23:35.359001  926762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 09:23:35.371540  926762 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 09:23:35.371627  926762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 09:23:35.383859  926762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 09:23:35.396075  926762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 09:23:35.408895  926762 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 09:23:35.421771  926762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 09:23:35.434065  926762 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 09:23:35.454103  926762 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
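	The sed calls above edit /etc/crio/crio.conf.d/02-crio.conf in place; a hedged sketch for spot-checking the keys they touch (expected values taken from the commands themselves, the rest of the file is untouched):
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, roughly:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",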
	I1217 09:23:35.466337  926762 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 09:23:35.476824  926762 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1217 09:23:35.476880  926762 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1217 09:23:35.496351  926762 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
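	The failed sysctl above only means br_netfilter was not loaded yet; after the modprobe and the ip_forward write, the same state can be checked by hand (a sketch):
	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables   # the key exists once the module is loaded
	cat /proc/sys/net/ipv4/ip_forward           # expect 1 after the echo above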
	I1217 09:23:35.507643  926762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 09:23:35.647615  926762 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 09:23:35.766045  926762 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 09:23:35.766121  926762 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 09:23:35.771348  926762 start.go:564] Will wait 60s for crictl version
	I1217 09:23:35.771413  926762 ssh_runner.go:195] Run: which crictl
	I1217 09:23:35.776006  926762 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 09:23:35.812113  926762 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1217 09:23:35.812203  926762 ssh_runner.go:195] Run: crio --version
	I1217 09:23:35.840784  926762 ssh_runner.go:195] Run: crio --version
	I1217 09:23:35.873218  926762 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	I1217 09:23:35.877456  926762 main.go:143] libmachine: domain test-preload-147081 has defined MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:35.877910  926762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ac:53:2f", ip: ""} in network mk-test-preload-147081: {Iface:virbr1 ExpiryTime:2025-12-17 10:23:31 +0000 UTC Type:0 Mac:52:54:00:ac:53:2f Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-147081 Clientid:01:52:54:00:ac:53:2f}
	I1217 09:23:35.877954  926762 main.go:143] libmachine: domain test-preload-147081 has defined IP address 192.168.39.188 and MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:35.878183  926762 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1217 09:23:35.882722  926762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 09:23:35.897406  926762 kubeadm.go:884] updating cluster {Name:test-preload-147081 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:test-preload-147081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 09:23:35.897570  926762 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 09:23:35.897614  926762 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 09:23:35.930460  926762 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.3". assuming images are not preloaded.
	I1217 09:23:35.930549  926762 ssh_runner.go:195] Run: which lz4
	I1217 09:23:35.934936  926762 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1217 09:23:35.939755  926762 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1217 09:23:35.939787  926762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340314847 bytes)
	I1217 09:23:37.179239  926762 crio.go:462] duration metric: took 1.244339193s to copy over tarball
	I1217 09:23:37.179426  926762 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1217 09:23:38.613210  926762 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.433743477s)
	I1217 09:23:38.613244  926762 crio.go:469] duration metric: took 1.433934329s to extract the tarball
	I1217 09:23:38.613254  926762 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1217 09:23:38.649937  926762 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 09:23:38.687416  926762 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 09:23:38.687454  926762 cache_images.go:86] Images are preloaded, skipping loading
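	Once the preload tarball is extracted, the second crictl images call above succeeds; a sketch of the same check by hand, assuming CRI-O's default storage root under /var/lib/containers:
	sudo crictl images | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy)'   # v1.34.3 images from the preload
	sudo du -sh /var/lib/containers                                                      # rough size of the extracted image store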
	I1217 09:23:38.687465  926762 kubeadm.go:935] updating node { 192.168.39.188 8443 v1.34.3 crio true true} ...
	I1217 09:23:38.687599  926762 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-147081 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.188
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:test-preload-147081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 09:23:38.687692  926762 ssh_runner.go:195] Run: crio config
	I1217 09:23:38.733046  926762 cni.go:84] Creating CNI manager for ""
	I1217 09:23:38.733073  926762 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 09:23:38.733092  926762 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 09:23:38.733113  926762 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.188 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-147081 NodeName:test-preload-147081 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.188"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.188 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 09:23:38.733253  926762 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.188
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-147081"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.188"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.188"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 09:23:38.733319  926762 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 09:23:38.745981  926762 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 09:23:38.746058  926762 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 09:23:38.758097  926762 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1217 09:23:38.779768  926762 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 09:23:38.801118  926762 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
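	Before the init phases below run, the generated config just copied to the node can be sanity-checked with the staged kubeadm binary; a sketch, assuming kubeadm config validate is available in this release series:
	sudo /var/lib/minikube/binaries/v1.34.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new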
	I1217 09:23:38.822808  926762 ssh_runner.go:195] Run: grep 192.168.39.188	control-plane.minikube.internal$ /etc/hosts
	I1217 09:23:38.827114  926762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.188	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 09:23:38.842280  926762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 09:23:38.982557  926762 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 09:23:39.005135  926762 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/test-preload-147081 for IP: 192.168.39.188
	I1217 09:23:39.005159  926762 certs.go:195] generating shared ca certs ...
	I1217 09:23:39.005178  926762 certs.go:227] acquiring lock for ca certs: {Name:mk9975fd3c0c6324a63f90fa6e20c46f3034e6ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 09:23:39.005363  926762 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-893359/.minikube/ca.key
	I1217 09:23:39.005424  926762 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-893359/.minikube/proxy-client-ca.key
	I1217 09:23:39.005455  926762 certs.go:257] generating profile certs ...
	I1217 09:23:39.005625  926762 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/test-preload-147081/client.key
	I1217 09:23:39.005720  926762 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/test-preload-147081/apiserver.key.8aebe685
	I1217 09:23:39.005800  926762 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/test-preload-147081/proxy-client.key
	I1217 09:23:39.005975  926762 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/897277.pem (1338 bytes)
	W1217 09:23:39.006024  926762 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-893359/.minikube/certs/897277_empty.pem, impossibly tiny 0 bytes
	I1217 09:23:39.006039  926762 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 09:23:39.006077  926762 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca.pem (1078 bytes)
	I1217 09:23:39.006111  926762 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/cert.pem (1123 bytes)
	I1217 09:23:39.006149  926762 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/key.pem (1675 bytes)
	I1217 09:23:39.006204  926762 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-893359/.minikube/files/etc/ssl/certs/8972772.pem (1708 bytes)
	I1217 09:23:39.007067  926762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 09:23:39.044688  926762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 09:23:39.073413  926762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 09:23:39.104028  926762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 09:23:39.133767  926762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/test-preload-147081/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1217 09:23:39.162962  926762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/test-preload-147081/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 09:23:39.191690  926762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/test-preload-147081/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 09:23:39.221719  926762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/test-preload-147081/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 09:23:39.250223  926762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/files/etc/ssl/certs/8972772.pem --> /usr/share/ca-certificates/8972772.pem (1708 bytes)
	I1217 09:23:39.278114  926762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 09:23:39.305298  926762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/certs/897277.pem --> /usr/share/ca-certificates/897277.pem (1338 bytes)
	I1217 09:23:39.332127  926762 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 09:23:39.351426  926762 ssh_runner.go:195] Run: openssl version
	I1217 09:23:39.357745  926762 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 09:23:39.368863  926762 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 09:23:39.380494  926762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 09:23:39.385634  926762 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 08:16 /usr/share/ca-certificates/minikubeCA.pem
	I1217 09:23:39.385702  926762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 09:23:39.392933  926762 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 09:23:39.404233  926762 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 09:23:39.415653  926762 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/897277.pem
	I1217 09:23:39.427347  926762 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/897277.pem /etc/ssl/certs/897277.pem
	I1217 09:23:39.438380  926762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/897277.pem
	I1217 09:23:39.443368  926762 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 08:35 /usr/share/ca-certificates/897277.pem
	I1217 09:23:39.443413  926762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/897277.pem
	I1217 09:23:39.450322  926762 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 09:23:39.461853  926762 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/897277.pem /etc/ssl/certs/51391683.0
	I1217 09:23:39.472766  926762 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8972772.pem
	I1217 09:23:39.483464  926762 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8972772.pem /etc/ssl/certs/8972772.pem
	I1217 09:23:39.494427  926762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8972772.pem
	I1217 09:23:39.499363  926762 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 08:35 /usr/share/ca-certificates/8972772.pem
	I1217 09:23:39.499405  926762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8972772.pem
	I1217 09:23:39.506198  926762 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 09:23:39.517089  926762 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8972772.pem /etc/ssl/certs/3ec20f2e.0
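	The .0 names above follow OpenSSL's subject-hash convention: the symlink is named after the certificate's hash so TLS libraries can look the CA up by hash. A sketch of producing one such link by hand, using paths from the log:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # resolves to b5213941.0 for this CA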
	I1217 09:23:39.527947  926762 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 09:23:39.532957  926762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 09:23:39.539885  926762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 09:23:39.546570  926762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 09:23:39.553472  926762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 09:23:39.560191  926762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 09:23:39.566883  926762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
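	The -checkend 86400 probes above exit non-zero when a certificate expires within the next 24 hours, which is what would trigger regeneration; the same check as a one-off:
	sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  && echo "valid for at least 24h" || echo "expires within 24h"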
	I1217 09:23:39.573635  926762 kubeadm.go:401] StartCluster: {Name:test-preload-147081 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:test-preload-147081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 09:23:39.573712  926762 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 09:23:39.573780  926762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 09:23:39.607752  926762 cri.go:89] found id: ""
	I1217 09:23:39.607818  926762 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 09:23:39.619804  926762 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 09:23:39.619821  926762 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 09:23:39.619865  926762 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 09:23:39.631235  926762 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 09:23:39.631736  926762 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-147081" does not appear in /home/jenkins/minikube-integration/22182-893359/kubeconfig
	I1217 09:23:39.631858  926762 kubeconfig.go:62] /home/jenkins/minikube-integration/22182-893359/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-147081" cluster setting kubeconfig missing "test-preload-147081" context setting]
	I1217 09:23:39.632120  926762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-893359/kubeconfig: {Name:mk96c1c47bbd55cd0ea3fb74224ea198e9d4fd5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 09:23:39.632676  926762 kapi.go:59] client config for test-preload-147081: &rest.Config{Host:"https://192.168.39.188:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22182-893359/.minikube/profiles/test-preload-147081/client.crt", KeyFile:"/home/jenkins/minikube-integration/22182-893359/.minikube/profiles/test-preload-147081/client.key", CAFile:"/home/jenkins/minikube-integration/22182-893359/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2818a40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 09:23:39.633108  926762 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 09:23:39.633122  926762 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 09:23:39.633127  926762 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 09:23:39.633131  926762 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 09:23:39.633135  926762 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 09:23:39.633568  926762 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 09:23:39.644428  926762 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.188
	I1217 09:23:39.644460  926762 kubeadm.go:1161] stopping kube-system containers ...
	I1217 09:23:39.644474  926762 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1217 09:23:39.644528  926762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 09:23:39.677951  926762 cri.go:89] found id: ""
	I1217 09:23:39.678020  926762 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 09:23:39.700688  926762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 09:23:39.711936  926762 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 09:23:39.711953  926762 kubeadm.go:158] found existing configuration files:
	
	I1217 09:23:39.711990  926762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 09:23:39.722097  926762 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 09:23:39.722175  926762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 09:23:39.732797  926762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 09:23:39.742945  926762 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 09:23:39.743007  926762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 09:23:39.753707  926762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 09:23:39.764041  926762 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 09:23:39.764085  926762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 09:23:39.774830  926762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 09:23:39.784858  926762 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 09:23:39.784900  926762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 09:23:39.795570  926762 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 09:23:39.806342  926762 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 09:23:39.858364  926762 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 09:23:41.842825  926762 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.984415617s)
	I1217 09:23:41.842904  926762 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 09:23:42.087585  926762 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 09:23:42.158016  926762 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1217 09:23:42.248267  926762 api_server.go:52] waiting for apiserver process to appear ...
	I1217 09:23:42.248377  926762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 09:23:42.748621  926762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 09:23:43.248446  926762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 09:23:43.749469  926762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 09:23:44.248637  926762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 09:23:44.279361  926762 api_server.go:72] duration metric: took 2.031096076s to wait for apiserver process to appear ...
	I1217 09:23:44.279392  926762 api_server.go:88] waiting for apiserver healthz status ...
	I1217 09:23:44.279416  926762 api_server.go:253] Checking apiserver healthz at https://192.168.39.188:8443/healthz ...
	I1217 09:23:44.280100  926762 api_server.go:269] stopped: https://192.168.39.188:8443/healthz: Get "https://192.168.39.188:8443/healthz": dial tcp 192.168.39.188:8443: connect: connection refused
	I1217 09:23:44.779828  926762 api_server.go:253] Checking apiserver healthz at https://192.168.39.188:8443/healthz ...
	I1217 09:23:46.756554  926762 api_server.go:279] https://192.168.39.188:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1217 09:23:46.756604  926762 api_server.go:103] status: https://192.168.39.188:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1217 09:23:46.756627  926762 api_server.go:253] Checking apiserver healthz at https://192.168.39.188:8443/healthz ...
	I1217 09:23:46.858716  926762 api_server.go:279] https://192.168.39.188:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 09:23:46.858763  926762 api_server.go:103] status: https://192.168.39.188:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 09:23:46.858800  926762 api_server.go:253] Checking apiserver healthz at https://192.168.39.188:8443/healthz ...
	I1217 09:23:46.872803  926762 api_server.go:279] https://192.168.39.188:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 09:23:46.872833  926762 api_server.go:103] status: https://192.168.39.188:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 09:23:47.280496  926762 api_server.go:253] Checking apiserver healthz at https://192.168.39.188:8443/healthz ...
	I1217 09:23:47.285383  926762 api_server.go:279] https://192.168.39.188:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 09:23:47.285411  926762 api_server.go:103] status: https://192.168.39.188:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 09:23:47.780115  926762 api_server.go:253] Checking apiserver healthz at https://192.168.39.188:8443/healthz ...
	I1217 09:23:47.787489  926762 api_server.go:279] https://192.168.39.188:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 09:23:47.787534  926762 api_server.go:103] status: https://192.168.39.188:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 09:23:48.280256  926762 api_server.go:253] Checking apiserver healthz at https://192.168.39.188:8443/healthz ...
	I1217 09:23:48.286330  926762 api_server.go:279] https://192.168.39.188:8443/healthz returned 200:
	ok
	I1217 09:23:48.293050  926762 api_server.go:141] control plane version: v1.34.3
	I1217 09:23:48.293083  926762 api_server.go:131] duration metric: took 4.013680815s to wait for apiserver health ...
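	Illustrative sketch only (assuming the kubeconfig context is named after the profile, test-preload-147081): the verbose /healthz output quoted above can be fetched directly with kubectl, e.g.
	  kubectl --context test-preload-147081 get --raw='/healthz?verbose'
	  kubectl --context test-preload-147081 get --raw='/readyz?verbose'
	Each [-] entry (here poststarthook/rbac/bootstrap-roles) marks a check that has not completed yet, which is why the poll above keeps returning 500 until every check reports ok.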
	I1217 09:23:48.293093  926762 cni.go:84] Creating CNI manager for ""
	I1217 09:23:48.293100  926762 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 09:23:48.294841  926762 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1217 09:23:48.295910  926762 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1217 09:23:48.308426  926762 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1217 09:23:48.328831  926762 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 09:23:48.337807  926762 system_pods.go:59] 7 kube-system pods found
	I1217 09:23:48.337845  926762 system_pods.go:61] "coredns-66bc5c9577-qpwmc" [0079d78a-a139-433a-8877-8c077b9d21a6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 09:23:48.337856  926762 system_pods.go:61] "etcd-test-preload-147081" [be61b6db-6da2-4100-8309-b022ea01eccf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 09:23:48.337866  926762 system_pods.go:61] "kube-apiserver-test-preload-147081" [6b490fa5-5c86-401c-94b4-003fcbb1b2ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 09:23:48.337881  926762 system_pods.go:61] "kube-controller-manager-test-preload-147081" [a8769093-b134-46b0-8e42-1f4e74a12486] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 09:23:48.337891  926762 system_pods.go:61] "kube-proxy-pxpsd" [f76af27d-2483-4c40-a538-611f05087898] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 09:23:48.337903  926762 system_pods.go:61] "kube-scheduler-test-preload-147081" [391780a3-b023-4d91-841c-b3b200c6ff53] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 09:23:48.337916  926762 system_pods.go:61] "storage-provisioner" [395b53e0-b961-4db7-8e19-f60896249958] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 09:23:48.337930  926762 system_pods.go:74] duration metric: took 9.075196ms to wait for pod list to return data ...
	I1217 09:23:48.337942  926762 node_conditions.go:102] verifying NodePressure condition ...
	I1217 09:23:48.344405  926762 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1217 09:23:48.344430  926762 node_conditions.go:123] node cpu capacity is 2
	I1217 09:23:48.344447  926762 node_conditions.go:105] duration metric: took 6.498483ms to run NodePressure ...
	I1217 09:23:48.344518  926762 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 09:23:48.663305  926762 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1217 09:23:48.667033  926762 kubeadm.go:744] kubelet initialised
	I1217 09:23:48.667054  926762 kubeadm.go:745] duration metric: took 3.722514ms waiting for restarted kubelet to initialise ...
	I1217 09:23:48.667071  926762 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 09:23:48.683848  926762 ops.go:34] apiserver oom_adj: -16
	I1217 09:23:48.683870  926762 kubeadm.go:602] duration metric: took 9.064043647s to restartPrimaryControlPlane
	I1217 09:23:48.683879  926762 kubeadm.go:403] duration metric: took 9.11025165s to StartCluster
	I1217 09:23:48.683898  926762 settings.go:142] acquiring lock: {Name:mk00e9c64ab8ac6f70bd45684fd03a06bf70934d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 09:23:48.683977  926762 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-893359/kubeconfig
	I1217 09:23:48.684555  926762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-893359/kubeconfig: {Name:mk96c1c47bbd55cd0ea3fb74224ea198e9d4fd5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 09:23:48.684824  926762 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 09:23:48.684954  926762 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 09:23:48.685026  926762 config.go:182] Loaded profile config "test-preload-147081": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 09:23:48.685045  926762 addons.go:70] Setting storage-provisioner=true in profile "test-preload-147081"
	I1217 09:23:48.685064  926762 addons.go:239] Setting addon storage-provisioner=true in "test-preload-147081"
	W1217 09:23:48.685077  926762 addons.go:248] addon storage-provisioner should already be in state true
	I1217 09:23:48.685081  926762 addons.go:70] Setting default-storageclass=true in profile "test-preload-147081"
	I1217 09:23:48.685104  926762 host.go:66] Checking if "test-preload-147081" exists ...
	I1217 09:23:48.685129  926762 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-147081"
	I1217 09:23:48.687172  926762 out.go:179] * Verifying Kubernetes components...
	I1217 09:23:48.687716  926762 kapi.go:59] client config for test-preload-147081: &rest.Config{Host:"https://192.168.39.188:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22182-893359/.minikube/profiles/test-preload-147081/client.crt", KeyFile:"/home/jenkins/minikube-integration/22182-893359/.minikube/profiles/test-preload-147081/client.key", CAFile:"/home/jenkins/minikube-integration/22182-893359/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2818a40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 09:23:48.687955  926762 addons.go:239] Setting addon default-storageclass=true in "test-preload-147081"
	W1217 09:23:48.687968  926762 addons.go:248] addon default-storageclass should already be in state true
	I1217 09:23:48.687985  926762 host.go:66] Checking if "test-preload-147081" exists ...
	I1217 09:23:48.688318  926762 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 09:23:48.688363  926762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 09:23:48.689320  926762 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 09:23:48.689336  926762 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 09:23:48.689436  926762 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 09:23:48.689452  926762 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 09:23:48.692344  926762 main.go:143] libmachine: domain test-preload-147081 has defined MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:48.692462  926762 main.go:143] libmachine: domain test-preload-147081 has defined MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:48.692889  926762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ac:53:2f", ip: ""} in network mk-test-preload-147081: {Iface:virbr1 ExpiryTime:2025-12-17 10:23:31 +0000 UTC Type:0 Mac:52:54:00:ac:53:2f Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-147081 Clientid:01:52:54:00:ac:53:2f}
	I1217 09:23:48.692914  926762 main.go:143] libmachine: domain test-preload-147081 has defined IP address 192.168.39.188 and MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:48.692892  926762 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ac:53:2f", ip: ""} in network mk-test-preload-147081: {Iface:virbr1 ExpiryTime:2025-12-17 10:23:31 +0000 UTC Type:0 Mac:52:54:00:ac:53:2f Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-147081 Clientid:01:52:54:00:ac:53:2f}
	I1217 09:23:48.693057  926762 main.go:143] libmachine: domain test-preload-147081 has defined IP address 192.168.39.188 and MAC address 52:54:00:ac:53:2f in network mk-test-preload-147081
	I1217 09:23:48.693077  926762 sshutil.go:56] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/test-preload-147081/id_rsa Username:docker}
	I1217 09:23:48.693323  926762 sshutil.go:56] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/test-preload-147081/id_rsa Username:docker}
	I1217 09:23:48.888356  926762 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 09:23:48.909270  926762 node_ready.go:35] waiting up to 6m0s for node "test-preload-147081" to be "Ready" ...
	I1217 09:23:48.914011  926762 node_ready.go:49] node "test-preload-147081" is "Ready"
	I1217 09:23:48.914039  926762 node_ready.go:38] duration metric: took 4.708869ms for node "test-preload-147081" to be "Ready" ...
	I1217 09:23:48.914054  926762 api_server.go:52] waiting for apiserver process to appear ...
	I1217 09:23:48.914119  926762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 09:23:48.939958  926762 api_server.go:72] duration metric: took 255.095099ms to wait for apiserver process to appear ...
	I1217 09:23:48.939984  926762 api_server.go:88] waiting for apiserver healthz status ...
	I1217 09:23:48.940002  926762 api_server.go:253] Checking apiserver healthz at https://192.168.39.188:8443/healthz ...
	I1217 09:23:48.947010  926762 api_server.go:279] https://192.168.39.188:8443/healthz returned 200:
	ok
	I1217 09:23:48.947895  926762 api_server.go:141] control plane version: v1.34.3
	I1217 09:23:48.947933  926762 api_server.go:131] duration metric: took 7.941587ms to wait for apiserver health ...
	I1217 09:23:48.947944  926762 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 09:23:48.952643  926762 system_pods.go:59] 7 kube-system pods found
	I1217 09:23:48.952681  926762 system_pods.go:61] "coredns-66bc5c9577-qpwmc" [0079d78a-a139-433a-8877-8c077b9d21a6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 09:23:48.952693  926762 system_pods.go:61] "etcd-test-preload-147081" [be61b6db-6da2-4100-8309-b022ea01eccf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 09:23:48.952705  926762 system_pods.go:61] "kube-apiserver-test-preload-147081" [6b490fa5-5c86-401c-94b4-003fcbb1b2ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 09:23:48.952714  926762 system_pods.go:61] "kube-controller-manager-test-preload-147081" [a8769093-b134-46b0-8e42-1f4e74a12486] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 09:23:48.952722  926762 system_pods.go:61] "kube-proxy-pxpsd" [f76af27d-2483-4c40-a538-611f05087898] Running
	I1217 09:23:48.952731  926762 system_pods.go:61] "kube-scheduler-test-preload-147081" [391780a3-b023-4d91-841c-b3b200c6ff53] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 09:23:48.952743  926762 system_pods.go:61] "storage-provisioner" [395b53e0-b961-4db7-8e19-f60896249958] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 09:23:48.952752  926762 system_pods.go:74] duration metric: took 4.799526ms to wait for pod list to return data ...
	I1217 09:23:48.952766  926762 default_sa.go:34] waiting for default service account to be created ...
	I1217 09:23:48.955061  926762 default_sa.go:45] found service account: "default"
	I1217 09:23:48.955077  926762 default_sa.go:55] duration metric: took 2.303531ms for default service account to be created ...
	I1217 09:23:48.955083  926762 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 09:23:48.959139  926762 system_pods.go:86] 7 kube-system pods found
	I1217 09:23:48.959177  926762 system_pods.go:89] "coredns-66bc5c9577-qpwmc" [0079d78a-a139-433a-8877-8c077b9d21a6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 09:23:48.959189  926762 system_pods.go:89] "etcd-test-preload-147081" [be61b6db-6da2-4100-8309-b022ea01eccf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 09:23:48.959200  926762 system_pods.go:89] "kube-apiserver-test-preload-147081" [6b490fa5-5c86-401c-94b4-003fcbb1b2ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 09:23:48.959209  926762 system_pods.go:89] "kube-controller-manager-test-preload-147081" [a8769093-b134-46b0-8e42-1f4e74a12486] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 09:23:48.959218  926762 system_pods.go:89] "kube-proxy-pxpsd" [f76af27d-2483-4c40-a538-611f05087898] Running
	I1217 09:23:48.959225  926762 system_pods.go:89] "kube-scheduler-test-preload-147081" [391780a3-b023-4d91-841c-b3b200c6ff53] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 09:23:48.959239  926762 system_pods.go:89] "storage-provisioner" [395b53e0-b961-4db7-8e19-f60896249958] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 09:23:48.959248  926762 system_pods.go:126] duration metric: took 4.159109ms to wait for k8s-apps to be running ...
	I1217 09:23:48.959262  926762 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 09:23:48.959324  926762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 09:23:48.978673  926762 system_svc.go:56] duration metric: took 19.402885ms WaitForService to wait for kubelet
	I1217 09:23:48.978705  926762 kubeadm.go:587] duration metric: took 293.847151ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 09:23:48.978726  926762 node_conditions.go:102] verifying NodePressure condition ...
	I1217 09:23:48.983238  926762 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1217 09:23:48.983264  926762 node_conditions.go:123] node cpu capacity is 2
	I1217 09:23:48.983279  926762 node_conditions.go:105] duration metric: took 4.546282ms to run NodePressure ...
	I1217 09:23:48.983297  926762 start.go:242] waiting for startup goroutines ...
	I1217 09:23:49.082075  926762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 09:23:49.084179  926762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 09:23:49.723426  926762 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1217 09:23:49.725105  926762 addons.go:530] duration metric: took 1.040154797s for enable addons: enabled=[default-storageclass storage-provisioner]
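	As an illustrative follow-up (context name again assumed to match the profile), the objects created by the two addon manifests applied above could be inspected with standard kubectl queries such as:
	  kubectl --context test-preload-147081 get storageclass
	  kubectl --context test-preload-147081 -n kube-system get pod storage-provisioner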
	I1217 09:23:49.725155  926762 start.go:247] waiting for cluster config update ...
	I1217 09:23:49.725175  926762 start.go:256] writing updated cluster config ...
	I1217 09:23:49.725479  926762 ssh_runner.go:195] Run: rm -f paused
	I1217 09:23:49.730918  926762 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 09:23:49.731382  926762 kapi.go:59] client config for test-preload-147081: &rest.Config{Host:"https://192.168.39.188:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22182-893359/.minikube/profiles/test-preload-147081/client.crt", KeyFile:"/home/jenkins/minikube-integration/22182-893359/.minikube/profiles/test-preload-147081/client.key", CAFile:"/home/jenkins/minikube-integration/22182-893359/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2818a40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 09:23:49.734926  926762 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qpwmc" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 09:23:51.740856  926762 pod_ready.go:104] pod "coredns-66bc5c9577-qpwmc" is not "Ready", error: <nil>
	W1217 09:23:54.241044  926762 pod_ready.go:104] pod "coredns-66bc5c9577-qpwmc" is not "Ready", error: <nil>
	W1217 09:23:56.741410  926762 pod_ready.go:104] pod "coredns-66bc5c9577-qpwmc" is not "Ready", error: <nil>
	W1217 09:23:58.742416  926762 pod_ready.go:104] pod "coredns-66bc5c9577-qpwmc" is not "Ready", error: <nil>
	I1217 09:23:59.740713  926762 pod_ready.go:94] pod "coredns-66bc5c9577-qpwmc" is "Ready"
	I1217 09:23:59.740741  926762 pod_ready.go:86] duration metric: took 10.005797583s for pod "coredns-66bc5c9577-qpwmc" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 09:23:59.743336  926762 pod_ready.go:83] waiting for pod "etcd-test-preload-147081" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 09:23:59.747449  926762 pod_ready.go:94] pod "etcd-test-preload-147081" is "Ready"
	I1217 09:23:59.747468  926762 pod_ready.go:86] duration metric: took 4.109059ms for pod "etcd-test-preload-147081" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 09:23:59.749410  926762 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-147081" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 09:23:59.753138  926762 pod_ready.go:94] pod "kube-apiserver-test-preload-147081" is "Ready"
	I1217 09:23:59.753160  926762 pod_ready.go:86] duration metric: took 3.727684ms for pod "kube-apiserver-test-preload-147081" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 09:23:59.755593  926762 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-147081" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 09:23:59.939401  926762 pod_ready.go:94] pod "kube-controller-manager-test-preload-147081" is "Ready"
	I1217 09:23:59.939437  926762 pod_ready.go:86] duration metric: took 183.825484ms for pod "kube-controller-manager-test-preload-147081" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 09:24:00.138791  926762 pod_ready.go:83] waiting for pod "kube-proxy-pxpsd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 09:24:00.538978  926762 pod_ready.go:94] pod "kube-proxy-pxpsd" is "Ready"
	I1217 09:24:00.539006  926762 pod_ready.go:86] duration metric: took 400.18848ms for pod "kube-proxy-pxpsd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 09:24:00.738408  926762 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-147081" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 09:24:01.139207  926762 pod_ready.go:94] pod "kube-scheduler-test-preload-147081" is "Ready"
	I1217 09:24:01.139238  926762 pod_ready.go:86] duration metric: took 400.802509ms for pod "kube-scheduler-test-preload-147081" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 09:24:01.139253  926762 pod_ready.go:40] duration metric: took 11.408306963s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 09:24:01.184522  926762 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 09:24:01.186237  926762 out.go:179] * Done! kubectl is now configured to use "test-preload-147081" cluster and "default" namespace by default
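	The per-pod readiness waits recorded above use the label selectors listed at pod_ready.go:37; roughly the same check could be run by hand with kubectl wait (sketch only, context name assumed to be test-preload-147081):
	  kubectl --context test-preload-147081 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s
	  kubectl --context test-preload-147081 -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=4m0s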
	
	
	==> CRI-O <==
	Dec 17 09:24:01 test-preload-147081 crio[834]: time="2025-12-17 09:24:01.968846308Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765963441968762030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135813,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=54b6eb24-688b-46ac-95ae-03b3586fcbb9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 09:24:01 test-preload-147081 crio[834]: time="2025-12-17 09:24:01.970473771Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6922cbc-42db-4f61-85c2-a2d51da9b383 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 09:24:01 test-preload-147081 crio[834]: time="2025-12-17 09:24:01.970590933Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6922cbc-42db-4f61-85c2-a2d51da9b383 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 09:24:01 test-preload-147081 crio[834]: time="2025-12-17 09:24:01.971368168Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ca1b686d9114f1a468daea74b3045b37639279d07321e2953c1679736d7522bf,PodSandboxId:711d68b54b7c8376e8f89b019ddb7820f526387d51e2ded7dd8ac1ebd1b35447,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765963430959201904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qpwmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0079d78a-a139-433a-8877-8c077b9d21a6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:841553aff06c77b4e6833dfa2f432932eac9be38589764dbdef2921857f43ddc,PodSandboxId:7b6245a7c8b1399879c524b35325b4ab72708e7705f35eaa4758da09d7c15b97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765963428337008047,Labe
ls:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 395b53e0-b961-4db7-8e19-f60896249958,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86d1182200b08151792cf3b8d825055a4ca9f297718a382500b97e1d4373d2fc,PodSandboxId:7b6245a7c8b1399879c524b35325b4ab72708e7705f35eaa4758da09d7c15b97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765963427582598184,Labels:map[string
]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 395b53e0-b961-4db7-8e19-f60896249958,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:872d8488489a9eeefb60dfec8dee1689555de4de6d8f0d1c29f10626add4d2f9,PodSandboxId:9b0e39c8871a24ada75833bbd09502bc1d8bd66e5d6a2c14ca335614df77ac25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765963427576227478,Labels:map[string]string{io.kubernetes
.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pxpsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f76af27d-2483-4c40-a538-611f05087898,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f206f49b1e2174184bc5769a23b7892ae696ea32b8358b6ef64a17b809456ad0,PodSandboxId:beb367958fedc6cb455b791a388206ca5b78086479455b8af3c1eb6cebd900c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765963423913448138,Labels:map[string]string{io.kubernetes.container.name: kube-schedul
er,io.kubernetes.pod.name: kube-scheduler-test-preload-147081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4286431f96f65a8804eb3aac51b28a8,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4e724528a30518fbf7d339a7b3456b04ad2733a60bc18ec53948880078fce3,PodSandboxId:4e3da01403167ca57364b2e4cb5232362ddceb2bfbb39b405c156a80a9b2ae7c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUN
NING,CreatedAt:1765963423896859720,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-147081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fa1f3b41c71cf56514edef8c81fb17a,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2bbccfeb7ad8541c22326eb80ece42caface6cc679e8ee24ce73b0a689ec5b,PodSandboxId:926711a6035674f9ceb63364bc3bb73c363910a9c5902dbaedd9a8f9b73ce5ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765963423924217359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-147081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e082962a7ec0df16d2d1319fa4d6cce,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf933fcdf259fa46aa3d829e680395481888d0a34e4e8886cc4f1c726c84531d,PodSandboxId:8310d99a447a3896938893e02349cad359035556b6a754f677fbc7ab53c4e13e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d2
36c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765963423892274962,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-147081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fe302673e73b993d7bd450c3b9092bf,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b6922cbc-42db-4f61-85c2-a2d51da9b383 name=/runtime.v1.RuntimeService/ListContain
ers
	Dec 17 09:24:02 test-preload-147081 crio[834]: time="2025-12-17 09:24:02.008082172Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d766c34f-63a2-4067-881a-612851909d58 name=/runtime.v1.RuntimeService/Version
	Dec 17 09:24:02 test-preload-147081 crio[834]: time="2025-12-17 09:24:02.008180201Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d766c34f-63a2-4067-881a-612851909d58 name=/runtime.v1.RuntimeService/Version
	Dec 17 09:24:02 test-preload-147081 crio[834]: time="2025-12-17 09:24:02.009457950Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=07527f49-7ec4-4808-9e44-6dd6cdccd61e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 09:24:02 test-preload-147081 crio[834]: time="2025-12-17 09:24:02.009965774Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765963442009937779,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135813,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=07527f49-7ec4-4808-9e44-6dd6cdccd61e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 09:24:02 test-preload-147081 crio[834]: time="2025-12-17 09:24:02.010916421Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a7a0b7b-3ce7-4f99-a0d7-e9cb19890010 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 09:24:02 test-preload-147081 crio[834]: time="2025-12-17 09:24:02.010987545Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a7a0b7b-3ce7-4f99-a0d7-e9cb19890010 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 09:24:02 test-preload-147081 crio[834]: time="2025-12-17 09:24:02.011195115Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ca1b686d9114f1a468daea74b3045b37639279d07321e2953c1679736d7522bf,PodSandboxId:711d68b54b7c8376e8f89b019ddb7820f526387d51e2ded7dd8ac1ebd1b35447,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765963430959201904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qpwmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0079d78a-a139-433a-8877-8c077b9d21a6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:841553aff06c77b4e6833dfa2f432932eac9be38589764dbdef2921857f43ddc,PodSandboxId:7b6245a7c8b1399879c524b35325b4ab72708e7705f35eaa4758da09d7c15b97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765963428337008047,Labe
ls:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 395b53e0-b961-4db7-8e19-f60896249958,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86d1182200b08151792cf3b8d825055a4ca9f297718a382500b97e1d4373d2fc,PodSandboxId:7b6245a7c8b1399879c524b35325b4ab72708e7705f35eaa4758da09d7c15b97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765963427582598184,Labels:map[string
]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 395b53e0-b961-4db7-8e19-f60896249958,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:872d8488489a9eeefb60dfec8dee1689555de4de6d8f0d1c29f10626add4d2f9,PodSandboxId:9b0e39c8871a24ada75833bbd09502bc1d8bd66e5d6a2c14ca335614df77ac25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765963427576227478,Labels:map[string]string{io.kubernetes
.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pxpsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f76af27d-2483-4c40-a538-611f05087898,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f206f49b1e2174184bc5769a23b7892ae696ea32b8358b6ef64a17b809456ad0,PodSandboxId:beb367958fedc6cb455b791a388206ca5b78086479455b8af3c1eb6cebd900c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765963423913448138,Labels:map[string]string{io.kubernetes.container.name: kube-schedul
er,io.kubernetes.pod.name: kube-scheduler-test-preload-147081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4286431f96f65a8804eb3aac51b28a8,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4e724528a30518fbf7d339a7b3456b04ad2733a60bc18ec53948880078fce3,PodSandboxId:4e3da01403167ca57364b2e4cb5232362ddceb2bfbb39b405c156a80a9b2ae7c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUN
NING,CreatedAt:1765963423896859720,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-147081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fa1f3b41c71cf56514edef8c81fb17a,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2bbccfeb7ad8541c22326eb80ece42caface6cc679e8ee24ce73b0a689ec5b,PodSandboxId:926711a6035674f9ceb63364bc3bb73c363910a9c5902dbaedd9a8f9b73ce5ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765963423924217359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-147081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e082962a7ec0df16d2d1319fa4d6cce,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf933fcdf259fa46aa3d829e680395481888d0a34e4e8886cc4f1c726c84531d,PodSandboxId:8310d99a447a3896938893e02349cad359035556b6a754f677fbc7ab53c4e13e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d2
36c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765963423892274962,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-147081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fe302673e73b993d7bd450c3b9092bf,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5a7a0b7b-3ce7-4f99-a0d7-e9cb19890010 name=/runtime.v1.RuntimeService/ListContain
ers
	Dec 17 09:24:02 test-preload-147081 crio[834]: time="2025-12-17 09:24:02.045821155Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8105b5d8-561f-4da7-9dfc-4a3237164f21 name=/runtime.v1.RuntimeService/Version
	Dec 17 09:24:02 test-preload-147081 crio[834]: time="2025-12-17 09:24:02.046201095Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8105b5d8-561f-4da7-9dfc-4a3237164f21 name=/runtime.v1.RuntimeService/Version
	Dec 17 09:24:02 test-preload-147081 crio[834]: time="2025-12-17 09:24:02.048009142Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=edb88bdf-b548-447b-bd12-998b280daa7a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 09:24:02 test-preload-147081 crio[834]: time="2025-12-17 09:24:02.048395518Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765963442048377174,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135813,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=edb88bdf-b548-447b-bd12-998b280daa7a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 09:24:02 test-preload-147081 crio[834]: time="2025-12-17 09:24:02.049325606Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f8b6a8f8-a673-4a0d-882f-f3c4617a740c name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 09:24:02 test-preload-147081 crio[834]: time="2025-12-17 09:24:02.049388313Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f8b6a8f8-a673-4a0d-882f-f3c4617a740c name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 09:24:02 test-preload-147081 crio[834]: time="2025-12-17 09:24:02.049540176Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ca1b686d9114f1a468daea74b3045b37639279d07321e2953c1679736d7522bf,PodSandboxId:711d68b54b7c8376e8f89b019ddb7820f526387d51e2ded7dd8ac1ebd1b35447,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765963430959201904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qpwmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0079d78a-a139-433a-8877-8c077b9d21a6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:841553aff06c77b4e6833dfa2f432932eac9be38589764dbdef2921857f43ddc,PodSandboxId:7b6245a7c8b1399879c524b35325b4ab72708e7705f35eaa4758da09d7c15b97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765963428337008047,Labe
ls:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 395b53e0-b961-4db7-8e19-f60896249958,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86d1182200b08151792cf3b8d825055a4ca9f297718a382500b97e1d4373d2fc,PodSandboxId:7b6245a7c8b1399879c524b35325b4ab72708e7705f35eaa4758da09d7c15b97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765963427582598184,Labels:map[string
]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 395b53e0-b961-4db7-8e19-f60896249958,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:872d8488489a9eeefb60dfec8dee1689555de4de6d8f0d1c29f10626add4d2f9,PodSandboxId:9b0e39c8871a24ada75833bbd09502bc1d8bd66e5d6a2c14ca335614df77ac25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765963427576227478,Labels:map[string]string{io.kubernetes
.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pxpsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f76af27d-2483-4c40-a538-611f05087898,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f206f49b1e2174184bc5769a23b7892ae696ea32b8358b6ef64a17b809456ad0,PodSandboxId:beb367958fedc6cb455b791a388206ca5b78086479455b8af3c1eb6cebd900c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765963423913448138,Labels:map[string]string{io.kubernetes.container.name: kube-schedul
er,io.kubernetes.pod.name: kube-scheduler-test-preload-147081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4286431f96f65a8804eb3aac51b28a8,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4e724528a30518fbf7d339a7b3456b04ad2733a60bc18ec53948880078fce3,PodSandboxId:4e3da01403167ca57364b2e4cb5232362ddceb2bfbb39b405c156a80a9b2ae7c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUN
NING,CreatedAt:1765963423896859720,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-147081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fa1f3b41c71cf56514edef8c81fb17a,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2bbccfeb7ad8541c22326eb80ece42caface6cc679e8ee24ce73b0a689ec5b,PodSandboxId:926711a6035674f9ceb63364bc3bb73c363910a9c5902dbaedd9a8f9b73ce5ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765963423924217359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-147081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e082962a7ec0df16d2d1319fa4d6cce,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf933fcdf259fa46aa3d829e680395481888d0a34e4e8886cc4f1c726c84531d,PodSandboxId:8310d99a447a3896938893e02349cad359035556b6a754f677fbc7ab53c4e13e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d2
36c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765963423892274962,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-147081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fe302673e73b993d7bd450c3b9092bf,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f8b6a8f8-a673-4a0d-882f-f3c4617a740c name=/runtime.v1.RuntimeService/ListContain
ers
	Dec 17 09:24:02 test-preload-147081 crio[834]: time="2025-12-17 09:24:02.081475164Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ba562764-dec2-4c71-b12a-1d74cb98f448 name=/runtime.v1.RuntimeService/Version
	Dec 17 09:24:02 test-preload-147081 crio[834]: time="2025-12-17 09:24:02.081566118Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ba562764-dec2-4c71-b12a-1d74cb98f448 name=/runtime.v1.RuntimeService/Version
	Dec 17 09:24:02 test-preload-147081 crio[834]: time="2025-12-17 09:24:02.083540217Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=426aef13-7859-4209-a0b4-7b4a1108f3c2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 09:24:02 test-preload-147081 crio[834]: time="2025-12-17 09:24:02.084390739Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765963442084315315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135813,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=426aef13-7859-4209-a0b4-7b4a1108f3c2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 09:24:02 test-preload-147081 crio[834]: time="2025-12-17 09:24:02.085555726Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d028df41-614b-4cf4-a86f-10af87b761fb name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 09:24:02 test-preload-147081 crio[834]: time="2025-12-17 09:24:02.085609391Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d028df41-614b-4cf4-a86f-10af87b761fb name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 09:24:02 test-preload-147081 crio[834]: time="2025-12-17 09:24:02.085855101Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ca1b686d9114f1a468daea74b3045b37639279d07321e2953c1679736d7522bf,PodSandboxId:711d68b54b7c8376e8f89b019ddb7820f526387d51e2ded7dd8ac1ebd1b35447,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765963430959201904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qpwmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0079d78a-a139-433a-8877-8c077b9d21a6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:841553aff06c77b4e6833dfa2f432932eac9be38589764dbdef2921857f43ddc,PodSandboxId:7b6245a7c8b1399879c524b35325b4ab72708e7705f35eaa4758da09d7c15b97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765963428337008047,Labe
ls:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 395b53e0-b961-4db7-8e19-f60896249958,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86d1182200b08151792cf3b8d825055a4ca9f297718a382500b97e1d4373d2fc,PodSandboxId:7b6245a7c8b1399879c524b35325b4ab72708e7705f35eaa4758da09d7c15b97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765963427582598184,Labels:map[string
]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 395b53e0-b961-4db7-8e19-f60896249958,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:872d8488489a9eeefb60dfec8dee1689555de4de6d8f0d1c29f10626add4d2f9,PodSandboxId:9b0e39c8871a24ada75833bbd09502bc1d8bd66e5d6a2c14ca335614df77ac25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765963427576227478,Labels:map[string]string{io.kubernetes
.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pxpsd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f76af27d-2483-4c40-a538-611f05087898,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f206f49b1e2174184bc5769a23b7892ae696ea32b8358b6ef64a17b809456ad0,PodSandboxId:beb367958fedc6cb455b791a388206ca5b78086479455b8af3c1eb6cebd900c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765963423913448138,Labels:map[string]string{io.kubernetes.container.name: kube-schedul
er,io.kubernetes.pod.name: kube-scheduler-test-preload-147081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4286431f96f65a8804eb3aac51b28a8,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4e724528a30518fbf7d339a7b3456b04ad2733a60bc18ec53948880078fce3,PodSandboxId:4e3da01403167ca57364b2e4cb5232362ddceb2bfbb39b405c156a80a9b2ae7c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUN
NING,CreatedAt:1765963423896859720,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-147081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fa1f3b41c71cf56514edef8c81fb17a,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2bbccfeb7ad8541c22326eb80ece42caface6cc679e8ee24ce73b0a689ec5b,PodSandboxId:926711a6035674f9ceb63364bc3bb73c363910a9c5902dbaedd9a8f9b73ce5ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765963423924217359,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-147081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e082962a7ec0df16d2d1319fa4d6cce,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf933fcdf259fa46aa3d829e680395481888d0a34e4e8886cc4f1c726c84531d,PodSandboxId:8310d99a447a3896938893e02349cad359035556b6a754f677fbc7ab53c4e13e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d2
36c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765963423892274962,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-147081,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fe302673e73b993d7bd450c3b9092bf,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d028df41-614b-4cf4-a86f-10af87b761fb name=/runtime.v1.RuntimeService/ListContain
ers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	ca1b686d9114f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   11 seconds ago      Running             coredns                   1                   711d68b54b7c8       coredns-66bc5c9577-qpwmc                      kube-system
	841553aff06c7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Running             storage-provisioner       3                   7b6245a7c8b13       storage-provisioner                           kube-system
	86d1182200b08       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Exited              storage-provisioner       2                   7b6245a7c8b13       storage-provisioner                           kube-system
	872d8488489a9       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691   14 seconds ago      Running             kube-proxy                1                   9b0e39c8871a2       kube-proxy-pxpsd                              kube-system
	cd2bbccfeb7ad       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c   18 seconds ago      Running             kube-apiserver            1                   926711a603567       kube-apiserver-test-preload-147081            kube-system
	f206f49b1e217       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78   18 seconds ago      Running             kube-scheduler            1                   beb367958fedc       kube-scheduler-test-preload-147081            kube-system
	ad4e724528a30       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   18 seconds ago      Running             etcd                      1                   4e3da01403167       etcd-test-preload-147081                      kube-system
	bf933fcdf259f       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942   18 seconds ago      Running             kube-controller-manager   1                   8310d99a447a3       kube-controller-manager-test-preload-147081   kube-system
	
	
	==> coredns [ca1b686d9114f1a468daea74b3045b37639279d07321e2953c1679736d7522bf] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36442 - 3189 "HINFO IN 3366320911654890393.3803559378883167572. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.108578268s
	
	
	==> describe nodes <==
	Name:               test-preload-147081
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-147081
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144
	                    minikube.k8s.io/name=test-preload-147081
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T09_22_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 09:22:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-147081
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 09:23:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 09:23:48 +0000   Wed, 17 Dec 2025 09:22:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 09:23:48 +0000   Wed, 17 Dec 2025 09:22:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 09:23:48 +0000   Wed, 17 Dec 2025 09:22:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 09:23:48 +0000   Wed, 17 Dec 2025 09:23:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.188
	  Hostname:    test-preload-147081
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 40941968c4434559b3b9eeb44b068573
	  System UUID:                40941968-c443-4559-b3b9-eeb44b068573
	  Boot ID:                    260cc4c8-f9a9-49d8-85b0-c41a7f2e1870
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-qpwmc                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     65s
	  kube-system                 etcd-test-preload-147081                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         71s
	  kube-system                 kube-apiserver-test-preload-147081             250m (12%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-controller-manager-test-preload-147081    200m (10%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-proxy-pxpsd                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-scheduler-test-preload-147081             100m (5%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 63s                kube-proxy       
	  Normal   Starting                 14s                kube-proxy       
	  Normal   NodeHasSufficientMemory  77s (x8 over 77s)  kubelet          Node test-preload-147081 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    77s (x8 over 77s)  kubelet          Node test-preload-147081 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     77s (x7 over 77s)  kubelet          Node test-preload-147081 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  77s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     71s                kubelet          Node test-preload-147081 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  71s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  71s                kubelet          Node test-preload-147081 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    71s                kubelet          Node test-preload-147081 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 71s                kubelet          Starting kubelet.
	  Normal   NodeReady                70s                kubelet          Node test-preload-147081 status is now: NodeReady
	  Normal   RegisteredNode           66s                node-controller  Node test-preload-147081 event: Registered Node test-preload-147081 in Controller
	  Normal   Starting                 20s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  20s (x8 over 20s)  kubelet          Node test-preload-147081 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20s (x8 over 20s)  kubelet          Node test-preload-147081 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20s (x7 over 20s)  kubelet          Node test-preload-147081 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  20s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 16s                kubelet          Node test-preload-147081 has been rebooted, boot id: 260cc4c8-f9a9-49d8-85b0-c41a7f2e1870
	  Normal   RegisteredNode           12s                node-controller  Node test-preload-147081 event: Registered Node test-preload-147081 in Controller
	
	
	==> dmesg <==
	[Dec17 09:23] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000054] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.007201] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.959109] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.103283] kauditd_printk_skb: 88 callbacks suppressed
	[  +2.039544] kauditd_printk_skb: 136 callbacks suppressed
	[  +1.880796] kauditd_printk_skb: 197 callbacks suppressed
	[  +8.850131] kauditd_printk_skb: 65 callbacks suppressed
	
	
	==> etcd [ad4e724528a30518fbf7d339a7b3456b04ad2733a60bc18ec53948880078fce3] <==
	{"level":"warn","ts":"2025-12-17T09:23:45.638994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T09:23:45.667501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T09:23:45.684112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T09:23:45.696081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T09:23:45.722949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T09:23:45.743811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T09:23:45.778173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T09:23:45.797567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T09:23:45.798088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T09:23:45.816707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T09:23:45.832271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T09:23:45.851111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T09:23:45.866814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T09:23:45.884924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T09:23:45.895104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T09:23:45.917825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T09:23:45.924469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T09:23:45.935309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T09:23:45.948976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T09:23:45.969614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T09:23:45.972475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T09:23:45.988454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T09:23:45.998782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T09:23:46.017938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T09:23:46.112890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55364","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:24:02 up 0 min,  0 users,  load average: 0.66, 0.17, 0.06
	Linux test-preload-147081 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Dec 16 03:41:16 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [cd2bbccfeb7ad8541c22326eb80ece42caface6cc679e8ee24ce73b0a689ec5b] <==
	I1217 09:23:46.793803       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 09:23:46.793833       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 09:23:46.793950       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 09:23:46.794263       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1217 09:23:46.794333       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 09:23:46.794363       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 09:23:46.794403       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 09:23:46.812098       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1217 09:23:46.812743       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 09:23:46.813544       1 aggregator.go:171] initial CRD sync complete...
	I1217 09:23:46.814021       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 09:23:46.814059       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 09:23:46.814076       1 cache.go:39] Caches are synced for autoregister controller
	I1217 09:23:46.820843       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1217 09:23:46.857408       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 09:23:46.864386       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 09:23:47.175316       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 09:23:47.693749       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 09:23:48.480415       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 09:23:48.522163       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 09:23:48.549182       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 09:23:48.557495       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 09:23:50.531790       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 09:23:50.580242       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 09:23:50.631190       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [bf933fcdf259fa46aa3d829e680395481888d0a34e4e8886cc4f1c726c84531d] <==
	I1217 09:23:50.228499       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1217 09:23:50.228939       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1217 09:23:50.229169       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1217 09:23:50.229487       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 09:23:50.229468       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-147081"
	I1217 09:23:50.230204       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1217 09:23:50.232316       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 09:23:50.232347       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 09:23:50.233603       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1217 09:23:50.233616       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1217 09:23:50.233604       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 09:23:50.234046       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 09:23:50.235982       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 09:23:50.245485       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1217 09:23:50.250981       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 09:23:50.251057       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 09:23:50.251064       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 09:23:50.253632       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1217 09:23:50.260736       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 09:23:50.261932       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1217 09:23:50.262206       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 09:23:50.275090       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1217 09:23:50.276352       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1217 09:23:50.276416       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1217 09:23:50.276566       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [872d8488489a9eeefb60dfec8dee1689555de4de6d8f0d1c29f10626add4d2f9] <==
	I1217 09:23:47.867728       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 09:23:47.969909       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 09:23:47.970048       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.188"]
	E1217 09:23:47.970282       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 09:23:48.018638       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1217 09:23:48.018787       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1217 09:23:48.018890       1 server_linux.go:132] "Using iptables Proxier"
	I1217 09:23:48.027870       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 09:23:48.028248       1 server.go:527] "Version info" version="v1.34.3"
	I1217 09:23:48.028273       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 09:23:48.030179       1 config.go:200] "Starting service config controller"
	I1217 09:23:48.030205       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 09:23:48.030221       1 config.go:106] "Starting endpoint slice config controller"
	I1217 09:23:48.030225       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 09:23:48.030234       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 09:23:48.030237       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 09:23:48.035251       1 config.go:309] "Starting node config controller"
	I1217 09:23:48.035294       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 09:23:48.035312       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 09:23:48.131352       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 09:23:48.131380       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 09:23:48.131408       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f206f49b1e2174184bc5769a23b7892ae696ea32b8358b6ef64a17b809456ad0] <==
	I1217 09:23:45.156002       1 serving.go:386] Generated self-signed cert in-memory
	W1217 09:23:46.743913       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 09:23:46.743951       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 09:23:46.743961       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 09:23:46.743967       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 09:23:46.811046       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1217 09:23:46.811088       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 09:23:46.830089       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 09:23:46.830166       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 09:23:46.832229       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 09:23:46.832296       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 09:23:46.930353       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 09:23:46 test-preload-147081 kubelet[1178]: I1217 09:23:46.855360    1178 setters.go:543] "Node became not ready" node="test-preload-147081" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-17T09:23:46Z","lastTransitionTime":"2025-12-17T09:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Dec 17 09:23:46 test-preload-147081 kubelet[1178]: E1217 09:23:46.880591    1178 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-147081\" already exists" pod="kube-system/kube-controller-manager-test-preload-147081"
	Dec 17 09:23:46 test-preload-147081 kubelet[1178]: I1217 09:23:46.880614    1178 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-147081"
	Dec 17 09:23:46 test-preload-147081 kubelet[1178]: E1217 09:23:46.896168    1178 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-147081\" already exists" pod="kube-system/kube-scheduler-test-preload-147081"
	Dec 17 09:23:46 test-preload-147081 kubelet[1178]: I1217 09:23:46.896205    1178 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-147081"
	Dec 17 09:23:46 test-preload-147081 kubelet[1178]: E1217 09:23:46.903621    1178 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-test-preload-147081\" already exists" pod="kube-system/etcd-test-preload-147081"
	Dec 17 09:23:47 test-preload-147081 kubelet[1178]: I1217 09:23:47.136402    1178 apiserver.go:52] "Watching apiserver"
	Dec 17 09:23:47 test-preload-147081 kubelet[1178]: E1217 09:23:47.144435    1178 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-qpwmc" podUID="0079d78a-a139-433a-8877-8c077b9d21a6"
	Dec 17 09:23:47 test-preload-147081 kubelet[1178]: I1217 09:23:47.154300    1178 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 17 09:23:47 test-preload-147081 kubelet[1178]: I1217 09:23:47.171403    1178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/395b53e0-b961-4db7-8e19-f60896249958-tmp\") pod \"storage-provisioner\" (UID: \"395b53e0-b961-4db7-8e19-f60896249958\") " pod="kube-system/storage-provisioner"
	Dec 17 09:23:47 test-preload-147081 kubelet[1178]: I1217 09:23:47.171539    1178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f76af27d-2483-4c40-a538-611f05087898-xtables-lock\") pod \"kube-proxy-pxpsd\" (UID: \"f76af27d-2483-4c40-a538-611f05087898\") " pod="kube-system/kube-proxy-pxpsd"
	Dec 17 09:23:47 test-preload-147081 kubelet[1178]: I1217 09:23:47.171637    1178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f76af27d-2483-4c40-a538-611f05087898-lib-modules\") pod \"kube-proxy-pxpsd\" (UID: \"f76af27d-2483-4c40-a538-611f05087898\") " pod="kube-system/kube-proxy-pxpsd"
	Dec 17 09:23:47 test-preload-147081 kubelet[1178]: E1217 09:23:47.172169    1178 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 17 09:23:47 test-preload-147081 kubelet[1178]: E1217 09:23:47.172257    1178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0079d78a-a139-433a-8877-8c077b9d21a6-config-volume podName:0079d78a-a139-433a-8877-8c077b9d21a6 nodeName:}" failed. No retries permitted until 2025-12-17 09:23:47.672232456 +0000 UTC m=+5.629434185 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0079d78a-a139-433a-8877-8c077b9d21a6-config-volume") pod "coredns-66bc5c9577-qpwmc" (UID: "0079d78a-a139-433a-8877-8c077b9d21a6") : object "kube-system"/"coredns" not registered
	Dec 17 09:23:47 test-preload-147081 kubelet[1178]: E1217 09:23:47.676987    1178 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 17 09:23:47 test-preload-147081 kubelet[1178]: E1217 09:23:47.677879    1178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0079d78a-a139-433a-8877-8c077b9d21a6-config-volume podName:0079d78a-a139-433a-8877-8c077b9d21a6 nodeName:}" failed. No retries permitted until 2025-12-17 09:23:48.677859552 +0000 UTC m=+6.635061281 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0079d78a-a139-433a-8877-8c077b9d21a6-config-volume") pod "coredns-66bc5c9577-qpwmc" (UID: "0079d78a-a139-433a-8877-8c077b9d21a6") : object "kube-system"/"coredns" not registered
	Dec 17 09:23:48 test-preload-147081 kubelet[1178]: I1217 09:23:48.308318    1178 scope.go:117] "RemoveContainer" containerID="86d1182200b08151792cf3b8d825055a4ca9f297718a382500b97e1d4373d2fc"
	Dec 17 09:23:48 test-preload-147081 kubelet[1178]: E1217 09:23:48.683293    1178 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 17 09:23:48 test-preload-147081 kubelet[1178]: E1217 09:23:48.683353    1178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0079d78a-a139-433a-8877-8c077b9d21a6-config-volume podName:0079d78a-a139-433a-8877-8c077b9d21a6 nodeName:}" failed. No retries permitted until 2025-12-17 09:23:50.683340325 +0000 UTC m=+8.640542042 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0079d78a-a139-433a-8877-8c077b9d21a6-config-volume") pod "coredns-66bc5c9577-qpwmc" (UID: "0079d78a-a139-433a-8877-8c077b9d21a6") : object "kube-system"/"coredns" not registered
	Dec 17 09:23:48 test-preload-147081 kubelet[1178]: I1217 09:23:48.758949    1178 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 17 09:23:52 test-preload-147081 kubelet[1178]: E1217 09:23:52.211893    1178 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765963432211513174  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:135813}  inodes_used:{value:64}}"
	Dec 17 09:23:52 test-preload-147081 kubelet[1178]: E1217 09:23:52.211939    1178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765963432211513174  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:135813}  inodes_used:{value:64}}"
	Dec 17 09:23:59 test-preload-147081 kubelet[1178]: I1217 09:23:59.618321    1178 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 17 09:24:02 test-preload-147081 kubelet[1178]: E1217 09:24:02.213404    1178 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765963442213010050  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:135813}  inodes_used:{value:64}}"
	Dec 17 09:24:02 test-preload-147081 kubelet[1178]: E1217 09:24:02.213445    1178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765963442213010050  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:135813}  inodes_used:{value:64}}"
	
	
	==> storage-provisioner [841553aff06c77b4e6833dfa2f432932eac9be38589764dbdef2921857f43ddc] <==
	I1217 09:23:48.456213       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 09:23:48.473884       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 09:23:48.473977       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 09:23:48.476804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 09:23:51.932769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 09:23:56.193553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 09:23:59.791790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [86d1182200b08151792cf3b8d825055a4ca9f297718a382500b97e1d4373d2fc] <==
	I1217 09:23:47.780974       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 09:23:47.784116       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-147081 -n test-preload-147081
helpers_test.go:270: (dbg) Run:  kubectl --context test-preload-147081 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:176: Cleaning up "test-preload-147081" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-147081
--- FAIL: TestPreload (117.29s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (45.42s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-869559 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-869559 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (41.450591074s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-869559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-893359/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-893359/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-869559" primary control-plane node in "pause-869559" cluster
	* Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-869559" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 09:30:01.119946  933201 out.go:360] Setting OutFile to fd 1 ...
	I1217 09:30:01.120092  933201 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 09:30:01.120101  933201 out.go:374] Setting ErrFile to fd 2...
	I1217 09:30:01.120106  933201 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 09:30:01.120307  933201 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
	I1217 09:30:01.120776  933201 out.go:368] Setting JSON to false
	I1217 09:30:01.121771  933201 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":15147,"bootTime":1765948654,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 09:30:01.121852  933201 start.go:143] virtualization: kvm guest
	I1217 09:30:01.123724  933201 out.go:179] * [pause-869559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 09:30:01.125561  933201 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 09:30:01.125559  933201 notify.go:221] Checking for updates...
	I1217 09:30:01.128168  933201 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 09:30:01.129479  933201 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-893359/kubeconfig
	I1217 09:30:01.130893  933201 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-893359/.minikube
	I1217 09:30:01.135709  933201 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 09:30:01.137044  933201 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 09:30:01.138983  933201 config.go:182] Loaded profile config "pause-869559": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 09:30:01.139773  933201 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 09:30:01.185643  933201 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 09:30:01.186924  933201 start.go:309] selected driver: kvm2
	I1217 09:30:01.186946  933201 start.go:927] validating driver "kvm2" against &{Name:pause-869559 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.3 ClusterName:pause-869559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-insta
ller:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 09:30:01.187130  933201 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 09:30:01.188405  933201 cni.go:84] Creating CNI manager for ""
	I1217 09:30:01.188482  933201 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 09:30:01.188567  933201 start.go:353] cluster config:
	{Name:pause-869559 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-869559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 09:30:01.188747  933201 iso.go:125] acquiring lock: {Name:mk258687bf3be9c6817f84af5b9e08a4f47b5420 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 09:30:01.191369  933201 out.go:179] * Starting "pause-869559" primary control-plane node in "pause-869559" cluster
	I1217 09:30:01.192660  933201 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 09:30:01.192694  933201 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-893359/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 09:30:01.192711  933201 cache.go:65] Caching tarball of preloaded images
	I1217 09:30:01.192930  933201 preload.go:238] Found /home/jenkins/minikube-integration/22182-893359/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 09:30:01.192950  933201 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 09:30:01.193160  933201 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/pause-869559/config.json ...
	I1217 09:30:01.193411  933201 start.go:360] acquireMachinesLock for pause-869559: {Name:mkdc91ccb2d66cdada71da88e972b4d333b7f63c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1217 09:30:02.213589  933201 start.go:364] duration metric: took 1.02012495s to acquireMachinesLock for "pause-869559"
	I1217 09:30:02.213661  933201 start.go:96] Skipping create...Using existing machine configuration
	I1217 09:30:02.213671  933201 fix.go:54] fixHost starting: 
	I1217 09:30:02.216386  933201 fix.go:112] recreateIfNeeded on pause-869559: state=Running err=<nil>
	W1217 09:30:02.216432  933201 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 09:30:02.217982  933201 out.go:252] * Updating the running kvm2 "pause-869559" VM ...
	I1217 09:30:02.218017  933201 machine.go:94] provisionDockerMachine start ...
	I1217 09:30:02.222109  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.222619  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:02.222651  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.222907  933201 main.go:143] libmachine: Using SSH client type: native
	I1217 09:30:02.223032  933201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1217 09:30:02.223047  933201 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 09:30:02.344762  933201 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-869559
	
	I1217 09:30:02.344809  933201 buildroot.go:166] provisioning hostname "pause-869559"
	I1217 09:30:02.348306  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.348882  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:02.348922  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.349141  933201 main.go:143] libmachine: Using SSH client type: native
	I1217 09:30:02.349257  933201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1217 09:30:02.349274  933201 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-869559 && echo "pause-869559" | sudo tee /etc/hostname
	I1217 09:30:02.479333  933201 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-869559
	
	I1217 09:30:02.482491  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.483012  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:02.483051  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.483239  933201 main.go:143] libmachine: Using SSH client type: native
	I1217 09:30:02.483338  933201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1217 09:30:02.483361  933201 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-869559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-869559/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-869559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 09:30:02.605593  933201 main.go:143] libmachine: SSH cmd err, output: <nil>: 
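The hostname step above issues three SSH commands in sequence: read the current hostname, set it and write /etc/hostname, then patch /etc/hosts only if the name is missing. A condensed sketch of that sequence, using the machine name from this run (an illustration, not minikube source):

	NAME=pause-869559
	sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
	if ! grep -xq ".*\s$NAME" /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/g" /etc/hosts
	  else
	    echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
	  fi
	fi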
	I1217 09:30:02.605630  933201 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22182-893359/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-893359/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-893359/.minikube}
	I1217 09:30:02.605667  933201 buildroot.go:174] setting up certificates
	I1217 09:30:02.605676  933201 provision.go:84] configureAuth start
	I1217 09:30:02.609261  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.609917  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:02.609954  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.613098  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.613551  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:02.613608  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.613769  933201 provision.go:143] copyHostCerts
	I1217 09:30:02.613834  933201 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-893359/.minikube/ca.pem, removing ...
	I1217 09:30:02.613855  933201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-893359/.minikube/ca.pem
	I1217 09:30:02.613923  933201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-893359/.minikube/ca.pem (1078 bytes)
	I1217 09:30:02.614062  933201 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-893359/.minikube/cert.pem, removing ...
	I1217 09:30:02.614075  933201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-893359/.minikube/cert.pem
	I1217 09:30:02.614109  933201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-893359/.minikube/cert.pem (1123 bytes)
	I1217 09:30:02.614189  933201 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-893359/.minikube/key.pem, removing ...
	I1217 09:30:02.614200  933201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-893359/.minikube/key.pem
	I1217 09:30:02.614230  933201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-893359/.minikube/key.pem (1675 bytes)
	I1217 09:30:02.614297  933201 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-893359/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca-key.pem org=jenkins.pause-869559 san=[127.0.0.1 192.168.39.212 localhost minikube pause-869559]
	I1217 09:30:02.784849  933201 provision.go:177] copyRemoteCerts
	I1217 09:30:02.784927  933201 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 09:30:02.788256  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.788861  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:02.788902  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.789126  933201 sshutil.go:56] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/pause-869559/id_rsa Username:docker}
	I1217 09:30:02.877674  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 09:30:02.910923  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 09:30:02.947310  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 09:30:02.984536  933201 provision.go:87] duration metric: took 378.825094ms to configureAuth
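configureAuth above regenerates the machine's server certificate and then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. One way to confirm they landed, reusing the SSH identity and address shown in the log (a sketch; the ssh invocation itself is assumed, not taken from minikube):

	ssh -i /home/jenkins/minikube-integration/22182-893359/.minikube/machines/pause-869559/id_rsa \
	  docker@192.168.39.212 \
	  'sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem'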
	I1217 09:30:02.984573  933201 buildroot.go:189] setting minikube options for container-runtime
	I1217 09:30:02.984890  933201 config.go:182] Loaded profile config "pause-869559": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 09:30:02.988586  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.989118  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:02.989159  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.989434  933201 main.go:143] libmachine: Using SSH client type: native
	I1217 09:30:02.989590  933201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1217 09:30:02.989608  933201 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 09:30:08.570005  933201 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 09:30:08.570039  933201 machine.go:97] duration metric: took 6.352010416s to provisionDockerMachine
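Most of the 6.35s spent in provisionDockerMachine goes to the final step: writing the CRI-O option drop-in and restarting the runtime. A quick check that the drop-in landed and CRI-O came back (sketch; paths are the ones from the SSH command above):

	cat /etc/sysconfig/crio.minikube
	sudo systemctl is-active crio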
	I1217 09:30:08.570055  933201 start.go:293] postStartSetup for "pause-869559" (driver="kvm2")
	I1217 09:30:08.570069  933201 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 09:30:08.570135  933201 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 09:30:08.574341  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.576044  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:08.576107  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.576543  933201 sshutil.go:56] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/pause-869559/id_rsa Username:docker}
	I1217 09:30:08.660490  933201 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 09:30:08.666474  933201 info.go:137] Remote host: Buildroot 2025.02
	I1217 09:30:08.666521  933201 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-893359/.minikube/addons for local assets ...
	I1217 09:30:08.666614  933201 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-893359/.minikube/files for local assets ...
	I1217 09:30:08.666721  933201 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-893359/.minikube/files/etc/ssl/certs/8972772.pem -> 8972772.pem in /etc/ssl/certs
	I1217 09:30:08.666867  933201 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 09:30:08.680185  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/files/etc/ssl/certs/8972772.pem --> /etc/ssl/certs/8972772.pem (1708 bytes)
	I1217 09:30:08.714215  933201 start.go:296] duration metric: took 144.139489ms for postStartSetup
	I1217 09:30:08.714273  933201 fix.go:56] duration metric: took 6.500603085s for fixHost
	I1217 09:30:08.717448  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.717959  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:08.717991  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.718232  933201 main.go:143] libmachine: Using SSH client type: native
	I1217 09:30:08.718342  933201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1217 09:30:08.718356  933201 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1217 09:30:08.825023  933201 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765963808.815889037
	
	I1217 09:30:08.825052  933201 fix.go:216] guest clock: 1765963808.815889037
	I1217 09:30:08.825062  933201 fix.go:229] Guest: 2025-12-17 09:30:08.815889037 +0000 UTC Remote: 2025-12-17 09:30:08.714280264 +0000 UTC m=+7.657224610 (delta=101.608773ms)
	I1217 09:30:08.825084  933201 fix.go:200] guest clock delta is within tolerance: 101.608773ms
	I1217 09:30:08.825103  933201 start.go:83] releasing machines lock for "pause-869559", held for 6.611463696s
	I1217 09:30:08.828854  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.829411  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:08.829449  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.830142  933201 ssh_runner.go:195] Run: cat /version.json
	I1217 09:30:08.830218  933201 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 09:30:08.833957  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.834185  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.834471  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:08.834520  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.834691  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:08.834723  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.834719  933201 sshutil.go:56] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/pause-869559/id_rsa Username:docker}
	I1217 09:30:08.834985  933201 sshutil.go:56] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/pause-869559/id_rsa Username:docker}
	I1217 09:30:08.923887  933201 ssh_runner.go:195] Run: systemctl --version
	I1217 09:30:08.952615  933201 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 09:30:09.118289  933201 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 09:30:09.129662  933201 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 09:30:09.129750  933201 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 09:30:09.141994  933201 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 09:30:09.142026  933201 start.go:496] detecting cgroup driver to use...
	I1217 09:30:09.142119  933201 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 09:30:09.162845  933201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 09:30:09.181009  933201 docker.go:218] disabling cri-docker service (if available) ...
	I1217 09:30:09.181095  933201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 09:30:09.201333  933201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 09:30:09.218961  933201 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 09:30:09.412029  933201 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 09:30:09.589245  933201 docker.go:234] disabling docker service ...
	I1217 09:30:09.589327  933201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 09:30:09.623748  933201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 09:30:09.643179  933201 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 09:30:09.842329  933201 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 09:30:10.038112  933201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
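Before committing to CRI-O, the run stops, disables and masks both cri-docker and docker so that only one runtime owns the CRI socket. The same systemctl calls as in the log, folded into a short sketch:

	for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
	  sudo systemctl stop -f "$unit" || true
	done
	sudo systemctl disable cri-docker.socket docker.socket
	sudo systemctl mask cri-docker.service docker.service
	sudo systemctl is-active --quiet docker || echo "docker is inactive"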
	I1217 09:30:10.055605  933201 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 09:30:10.082343  933201 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 09:30:10.082420  933201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 09:30:10.097457  933201 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 09:30:10.097567  933201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 09:30:10.114705  933201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 09:30:10.129082  933201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 09:30:10.142573  933201 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 09:30:10.157154  933201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 09:30:10.169859  933201 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 09:30:10.186318  933201 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 09:30:10.203561  933201 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 09:30:10.214614  933201 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 09:30:10.225668  933201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 09:30:10.399705  933201 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 09:30:10.630565  933201 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 09:30:10.630665  933201 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 09:30:10.637597  933201 start.go:564] Will wait 60s for crictl version
	I1217 09:30:10.637667  933201 ssh_runner.go:195] Run: which crictl
	I1217 09:30:10.642067  933201 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 09:30:10.677915  933201 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1217 09:30:10.677992  933201 ssh_runner.go:195] Run: crio --version
	I1217 09:30:10.710746  933201 ssh_runner.go:195] Run: crio --version
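The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon_cgroup, the unprivileged-port sysctl) before restarting CRI-O and probing it with crictl. Reviewing the resulting drop-in and the live runtime is a two-liner (sketch, using the paths from this run):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version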
	I1217 09:30:10.745685  933201 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	I1217 09:30:10.749691  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:10.750144  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:10.750176  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:10.750403  933201 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1217 09:30:10.755422  933201 kubeadm.go:884] updating cluster {Name:pause-869559 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-869559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 09:30:10.755630  933201 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 09:30:10.755703  933201 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 09:30:10.797537  933201 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 09:30:10.797563  933201 crio.go:433] Images already preloaded, skipping extraction
	I1217 09:30:10.797618  933201 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 09:30:10.835028  933201 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 09:30:10.835060  933201 cache_images.go:86] Images are preloaded, skipping loading
	I1217 09:30:10.835072  933201 kubeadm.go:935] updating node { 192.168.39.212 8443 v1.34.3 crio true true} ...
	I1217 09:30:10.835213  933201 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-869559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.212
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:pause-869559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
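The drop-in rendered above overrides the kubelet ExecStart with node-specific flags; it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the 312-byte scp further down. Once installed, the merged unit can be reviewed and started like this (sketch):

	sudo systemctl cat kubelet
	sudo systemctl daemon-reload && sudo systemctl start kubelet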
	I1217 09:30:10.835312  933201 ssh_runner.go:195] Run: crio config
	I1217 09:30:10.883111  933201 cni.go:84] Creating CNI manager for ""
	I1217 09:30:10.883150  933201 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 09:30:10.883171  933201 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 09:30:10.883193  933201 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.212 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-869559 NodeName:pause-869559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.212 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 09:30:10.883327  933201 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.212
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-869559"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.212"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.212"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 09:30:10.883396  933201 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 09:30:10.897455  933201 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 09:30:10.897546  933201 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 09:30:10.909140  933201 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1217 09:30:10.931429  933201 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 09:30:10.952610  933201 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
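The 2215-byte scp just above puts the kubeadm configuration rendered earlier onto the node as /var/tmp/minikube/kubeadm.yaml.new. A dry run against that file is one way to sanity-check it before kubeadm is invoked for real (sketch; --dry-run is a standard kubeadm flag, not something this log exercises, and it assumes the kubeadm binary sits under the binaries directory checked above):

	sudo /var/lib/minikube/binaries/v1.34.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run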
	I1217 09:30:10.974694  933201 ssh_runner.go:195] Run: grep 192.168.39.212	control-plane.minikube.internal$ /etc/hosts
	I1217 09:30:10.979493  933201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 09:30:11.163980  933201 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 09:30:11.188259  933201 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/pause-869559 for IP: 192.168.39.212
	I1217 09:30:11.188285  933201 certs.go:195] generating shared ca certs ...
	I1217 09:30:11.188305  933201 certs.go:227] acquiring lock for ca certs: {Name:mk9975fd3c0c6324a63f90fa6e20c46f3034e6ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 09:30:11.188473  933201 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-893359/.minikube/ca.key
	I1217 09:30:11.188561  933201 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-893359/.minikube/proxy-client-ca.key
	I1217 09:30:11.188585  933201 certs.go:257] generating profile certs ...
	I1217 09:30:11.188707  933201 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/pause-869559/client.key
	I1217 09:30:11.188802  933201 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/pause-869559/apiserver.key.873948f8
	I1217 09:30:11.188878  933201 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/pause-869559/proxy-client.key
	I1217 09:30:11.189026  933201 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/897277.pem (1338 bytes)
	W1217 09:30:11.189071  933201 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-893359/.minikube/certs/897277_empty.pem, impossibly tiny 0 bytes
	I1217 09:30:11.189087  933201 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 09:30:11.189132  933201 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca.pem (1078 bytes)
	I1217 09:30:11.189179  933201 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/cert.pem (1123 bytes)
	I1217 09:30:11.189218  933201 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/key.pem (1675 bytes)
	I1217 09:30:11.189281  933201 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-893359/.minikube/files/etc/ssl/certs/8972772.pem (1708 bytes)
	I1217 09:30:11.189920  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 09:30:11.223103  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 09:30:11.258163  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 09:30:11.290700  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 09:30:11.336294  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/pause-869559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 09:30:11.373412  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/pause-869559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 09:30:11.407240  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/pause-869559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 09:30:11.440057  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/pause-869559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 09:30:11.481401  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 09:30:11.606882  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/certs/897277.pem --> /usr/share/ca-certificates/897277.pem (1338 bytes)
	I1217 09:30:11.692782  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/files/etc/ssl/certs/8972772.pem --> /usr/share/ca-certificates/8972772.pem (1708 bytes)
	I1217 09:30:11.787594  933201 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 09:30:11.868730  933201 ssh_runner.go:195] Run: openssl version
	I1217 09:30:11.884874  933201 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 09:30:11.949201  933201 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 09:30:12.020522  933201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 09:30:12.037145  933201 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 08:16 /usr/share/ca-certificates/minikubeCA.pem
	I1217 09:30:12.037251  933201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 09:30:12.053797  933201 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 09:30:12.092653  933201 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/897277.pem
	I1217 09:30:12.126174  933201 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/897277.pem /etc/ssl/certs/897277.pem
	I1217 09:30:12.159722  933201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/897277.pem
	I1217 09:30:12.176614  933201 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 08:35 /usr/share/ca-certificates/897277.pem
	I1217 09:30:12.176720  933201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/897277.pem
	I1217 09:30:12.203040  933201 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 09:30:12.223867  933201 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8972772.pem
	I1217 09:30:12.248084  933201 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8972772.pem /etc/ssl/certs/8972772.pem
	I1217 09:30:12.274635  933201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8972772.pem
	I1217 09:30:12.292395  933201 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 08:35 /usr/share/ca-certificates/8972772.pem
	I1217 09:30:12.292463  933201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8972772.pem
	I1217 09:30:12.314704  933201 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 09:30:12.347685  933201 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 09:30:12.356735  933201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 09:30:12.373390  933201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 09:30:12.402432  933201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 09:30:12.421754  933201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 09:30:12.441110  933201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 09:30:12.471578  933201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
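Each control-plane certificate is checked with openssl x509 -checkend 86400, i.e. it must stay valid for at least another 24 hours. The same per-file checks, folded into a loop over the paths this run touches (sketch):

	for crt in apiserver-etcd-client.crt apiserver-kubelet-client.crt etcd/server.crt \
	           etcd/healthcheck-client.crt etcd/peer.crt front-proxy-client.crt; do
	  sudo openssl x509 -noout -in "/var/lib/minikube/certs/$crt" -checkend 86400 && echo "ok: $crt"
	done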
	I1217 09:30:12.496274  933201 kubeadm.go:401] StartCluster: {Name:pause-869559 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-869559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 09:30:12.496454  933201 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 09:30:12.496546  933201 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 09:30:12.625666  933201 cri.go:89] found id: "e0337c6439f8b8fce3bb0b66c3433d27794c3f9fa268ba8b7117cdda47e7ac5b"
	I1217 09:30:12.625696  933201 cri.go:89] found id: "4f631c9821b5e83eed22adff990fd2581e6c144822df0e654d14bd419b364ac6"
	I1217 09:30:12.625703  933201 cri.go:89] found id: "e331a882f16a8c302e6391a48abd39817a2b42d9c81de9f1b744ae81e2a67ad7"
	I1217 09:30:12.625708  933201 cri.go:89] found id: "2abdf54511473a3eef2b8ef0906e02182f4eb5e5f0bb0c765961af5b82cfce71"
	I1217 09:30:12.625713  933201 cri.go:89] found id: "86f9b55f6bae4f650c85a2f7d60899240c9afcb2f5ab7b3a2a8a69519d939917"
	I1217 09:30:12.625718  933201 cri.go:89] found id: "47326fe04fc5bff2fe0eb071e7d7d76e2e37d6be23dcd9075195432501497e5e"
	I1217 09:30:12.625723  933201 cri.go:89] found id: "e06d3f3a0193a41ffac4306c738bb419c8aa15f440fec5d686635327ea6a97ed"
	I1217 09:30:12.625729  933201 cri.go:89] found id: ""
	I1217 09:30:12.625785  933201 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
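The stderr dump above ends with StartCluster enumerating the kube-system containers that already exist on the node. The same listing can be reproduced with the two commands the log shows (sketch):

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo runc list -f json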
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-869559 -n pause-869559
helpers_test.go:253: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-869559 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-869559 logs -n 25: (1.323956484s)
helpers_test.go:261: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-960765 sudo systemctl cat crio --no-pager                                                                                                         │ cilium-960765             │ jenkins │ v1.37.0 │ 17 Dec 25 09:27 UTC │                     │
	│ ssh     │ -p cilium-960765 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                               │ cilium-960765             │ jenkins │ v1.37.0 │ 17 Dec 25 09:27 UTC │                     │
	│ ssh     │ -p cilium-960765 sudo crio config                                                                                                                           │ cilium-960765             │ jenkins │ v1.37.0 │ 17 Dec 25 09:27 UTC │                     │
	│ delete  │ -p cilium-960765                                                                                                                                            │ cilium-960765             │ jenkins │ v1.37.0 │ 17 Dec 25 09:27 UTC │ 17 Dec 25 09:27 UTC │
	│ start   │ -p running-upgrade-879489 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                          │ running-upgrade-879489    │ jenkins │ v1.35.0 │ 17 Dec 25 09:27 UTC │ 17 Dec 25 09:28 UTC │
	│ start   │ -p NoKubernetes-229767 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                         │ NoKubernetes-229767       │ jenkins │ v1.37.0 │ 17 Dec 25 09:27 UTC │ 17 Dec 25 09:28 UTC │
	│ start   │ -p kubernetes-upgrade-317314 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                             │ kubernetes-upgrade-317314 │ jenkins │ v1.37.0 │ 17 Dec 25 09:27 UTC │                     │
	│ start   │ -p kubernetes-upgrade-317314 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                 │ kubernetes-upgrade-317314 │ jenkins │ v1.37.0 │ 17 Dec 25 09:27 UTC │ 17 Dec 25 09:28 UTC │
	│ stop    │ stopped-upgrade-916798 stop                                                                                                                                 │ stopped-upgrade-916798    │ jenkins │ v1.35.0 │ 17 Dec 25 09:28 UTC │ 17 Dec 25 09:28 UTC │
	│ start   │ -p stopped-upgrade-916798 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ stopped-upgrade-916798    │ jenkins │ v1.37.0 │ 17 Dec 25 09:28 UTC │ 17 Dec 25 09:28 UTC │
	│ delete  │ -p NoKubernetes-229767                                                                                                                                      │ NoKubernetes-229767       │ jenkins │ v1.37.0 │ 17 Dec 25 09:28 UTC │ 17 Dec 25 09:28 UTC │
	│ start   │ -p NoKubernetes-229767 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                         │ NoKubernetes-229767       │ jenkins │ v1.37.0 │ 17 Dec 25 09:28 UTC │ 17 Dec 25 09:29 UTC │
	│ start   │ -p running-upgrade-879489 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ running-upgrade-879489    │ jenkins │ v1.37.0 │ 17 Dec 25 09:28 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-317314                                                                                                                                │ kubernetes-upgrade-317314 │ jenkins │ v1.37.0 │ 17 Dec 25 09:28 UTC │ 17 Dec 25 09:28 UTC │
	│ start   │ -p pause-869559 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                     │ pause-869559              │ jenkins │ v1.37.0 │ 17 Dec 25 09:28 UTC │ 17 Dec 25 09:30 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-916798 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ stopped-upgrade-916798    │ jenkins │ v1.37.0 │ 17 Dec 25 09:28 UTC │                     │
	│ delete  │ -p stopped-upgrade-916798                                                                                                                                   │ stopped-upgrade-916798    │ jenkins │ v1.37.0 │ 17 Dec 25 09:28 UTC │ 17 Dec 25 09:28 UTC │
	│ start   │ -p cert-expiration-779044 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                        │ cert-expiration-779044    │ jenkins │ v1.37.0 │ 17 Dec 25 09:28 UTC │ 17 Dec 25 09:30 UTC │
	│ ssh     │ -p NoKubernetes-229767 sudo systemctl is-active --quiet service kubelet                                                                                     │ NoKubernetes-229767       │ jenkins │ v1.37.0 │ 17 Dec 25 09:29 UTC │                     │
	│ stop    │ -p NoKubernetes-229767                                                                                                                                      │ NoKubernetes-229767       │ jenkins │ v1.37.0 │ 17 Dec 25 09:29 UTC │ 17 Dec 25 09:29 UTC │
	│ start   │ -p NoKubernetes-229767 --driver=kvm2  --container-runtime=crio                                                                                              │ NoKubernetes-229767       │ jenkins │ v1.37.0 │ 17 Dec 25 09:29 UTC │ 17 Dec 25 09:30 UTC │
	│ start   │ -p pause-869559 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                              │ pause-869559              │ jenkins │ v1.37.0 │ 17 Dec 25 09:30 UTC │ 17 Dec 25 09:30 UTC │
	│ ssh     │ -p NoKubernetes-229767 sudo systemctl is-active --quiet service kubelet                                                                                     │ NoKubernetes-229767       │ jenkins │ v1.37.0 │ 17 Dec 25 09:30 UTC │                     │
	│ delete  │ -p NoKubernetes-229767                                                                                                                                      │ NoKubernetes-229767       │ jenkins │ v1.37.0 │ 17 Dec 25 09:30 UTC │ 17 Dec 25 09:30 UTC │
	│ start   │ -p force-systemd-flag-040357 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                   │ force-systemd-flag-040357 │ jenkins │ v1.37.0 │ 17 Dec 25 09:30 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 09:30:05
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
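Every entry below follows the klog prefix documented in the header line above ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg). As a hedged illustration only (this parser is not part of minikube or klog; the names are mine), a line such as the first one below can be split into its fields like this:

	// klogparse.go - illustrative only; splits the klog prefix documented above.
	package main

	import (
		"fmt"
		"regexp"
	)

	// severity, mmdd, time, threadid, file, line, message
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	func main() {
		line := "I1217 09:30:05.256531  933313 out.go:360] Setting OutFile to fd 1 ..."
		m := klogLine.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("not a klog line")
			return
		}
		fmt.Printf("severity=%s date=%s time=%s pid=%s file=%s line=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}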
	I1217 09:30:05.256531  933313 out.go:360] Setting OutFile to fd 1 ...
	I1217 09:30:05.256649  933313 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 09:30:05.256658  933313 out.go:374] Setting ErrFile to fd 2...
	I1217 09:30:05.256662  933313 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 09:30:05.256843  933313 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
	I1217 09:30:05.257322  933313 out.go:368] Setting JSON to false
	I1217 09:30:05.258252  933313 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":15151,"bootTime":1765948654,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 09:30:05.258305  933313 start.go:143] virtualization: kvm guest
	I1217 09:30:05.260413  933313 out.go:179] * [force-systemd-flag-040357] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 09:30:05.261619  933313 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 09:30:05.261611  933313 notify.go:221] Checking for updates...
	I1217 09:30:05.263866  933313 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 09:30:05.265066  933313 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-893359/kubeconfig
	I1217 09:30:05.266089  933313 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-893359/.minikube
	I1217 09:30:05.267216  933313 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 09:30:05.268361  933313 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 09:30:05.270088  933313 config.go:182] Loaded profile config "cert-expiration-779044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 09:30:05.270313  933313 config.go:182] Loaded profile config "pause-869559": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 09:30:05.270455  933313 config.go:182] Loaded profile config "running-upgrade-879489": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1217 09:30:05.270590  933313 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 09:30:05.305722  933313 out.go:179] * Using the kvm2 driver based on user configuration
	I1217 09:30:05.306712  933313 start.go:309] selected driver: kvm2
	I1217 09:30:05.306733  933313 start.go:927] validating driver "kvm2" against <nil>
	I1217 09:30:05.306747  933313 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 09:30:05.307454  933313 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 09:30:05.307735  933313 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 09:30:05.307774  933313 cni.go:84] Creating CNI manager for ""
	I1217 09:30:05.307820  933313 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 09:30:05.307830  933313 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 09:30:05.307869  933313 start.go:353] cluster config:
	{Name:force-systemd-flag-040357 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:force-systemd-flag-040357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 09:30:05.307959  933313 iso.go:125] acquiring lock: {Name:mk258687bf3be9c6817f84af5b9e08a4f47b5420 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 09:30:05.309328  933313 out.go:179] * Starting "force-systemd-flag-040357" primary control-plane node in "force-systemd-flag-040357" cluster
	I1217 09:30:02.217982  933201 out.go:252] * Updating the running kvm2 "pause-869559" VM ...
	I1217 09:30:02.218017  933201 machine.go:94] provisionDockerMachine start ...
	I1217 09:30:02.222109  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.222619  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:02.222651  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.222907  933201 main.go:143] libmachine: Using SSH client type: native
	I1217 09:30:02.223032  933201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1217 09:30:02.223047  933201 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 09:30:02.344762  933201 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-869559
	
	I1217 09:30:02.344809  933201 buildroot.go:166] provisioning hostname "pause-869559"
	I1217 09:30:02.348306  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.348882  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:02.348922  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.349141  933201 main.go:143] libmachine: Using SSH client type: native
	I1217 09:30:02.349257  933201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1217 09:30:02.349274  933201 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-869559 && echo "pause-869559" | sudo tee /etc/hostname
	I1217 09:30:02.479333  933201 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-869559
	
	I1217 09:30:02.482491  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.483012  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:02.483051  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.483239  933201 main.go:143] libmachine: Using SSH client type: native
	I1217 09:30:02.483338  933201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1217 09:30:02.483361  933201 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-869559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-869559/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-869559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 09:30:02.605593  933201 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 09:30:02.605630  933201 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22182-893359/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-893359/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-893359/.minikube}
	I1217 09:30:02.605667  933201 buildroot.go:174] setting up certificates
	I1217 09:30:02.605676  933201 provision.go:84] configureAuth start
	I1217 09:30:02.609261  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.609917  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:02.609954  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.613098  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.613551  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:02.613608  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.613769  933201 provision.go:143] copyHostCerts
	I1217 09:30:02.613834  933201 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-893359/.minikube/ca.pem, removing ...
	I1217 09:30:02.613855  933201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-893359/.minikube/ca.pem
	I1217 09:30:02.613923  933201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-893359/.minikube/ca.pem (1078 bytes)
	I1217 09:30:02.614062  933201 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-893359/.minikube/cert.pem, removing ...
	I1217 09:30:02.614075  933201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-893359/.minikube/cert.pem
	I1217 09:30:02.614109  933201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-893359/.minikube/cert.pem (1123 bytes)
	I1217 09:30:02.614189  933201 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-893359/.minikube/key.pem, removing ...
	I1217 09:30:02.614200  933201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-893359/.minikube/key.pem
	I1217 09:30:02.614230  933201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-893359/.minikube/key.pem (1675 bytes)
	I1217 09:30:02.614297  933201 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-893359/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca-key.pem org=jenkins.pause-869559 san=[127.0.0.1 192.168.39.212 localhost minikube pause-869559]
	I1217 09:30:02.784849  933201 provision.go:177] copyRemoteCerts
	I1217 09:30:02.784927  933201 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 09:30:02.788256  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.788861  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:02.788902  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.789126  933201 sshutil.go:56] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/pause-869559/id_rsa Username:docker}
	I1217 09:30:02.877674  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 09:30:02.910923  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 09:30:02.947310  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 09:30:02.984536  933201 provision.go:87] duration metric: took 378.825094ms to configureAuth
	I1217 09:30:02.984573  933201 buildroot.go:189] setting minikube options for container-runtime
	I1217 09:30:02.984890  933201 config.go:182] Loaded profile config "pause-869559": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 09:30:02.988586  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.989118  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:02.989159  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.989434  933201 main.go:143] libmachine: Using SSH client type: native
	I1217 09:30:02.989590  933201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1217 09:30:02.989608  933201 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 09:30:06.438127  931969 api_server.go:269] stopped: https://192.168.61.151:8443/healthz: Get "https://192.168.61.151:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1217 09:30:06.438181  931969 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
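The two api_server.go lines above come from the readiness loop that polls the apiserver's /healthz endpoint and reports it as "stopped" when the HTTP client times out. A minimal sketch of such a probe, assuming a plain net/http client (the 5-second timeout and the TLS handling are illustrative assumptions, not minikube's code, which would trust the cluster CA):

	// healthzprobe.go - minimal sketch of an apiserver /healthz probe with a
	// client-side timeout, in the spirit of the api_server.go lines above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // the Client.Timeout referenced in the error above
			Transport: &http.Transport{
				// A real caller would load the cluster CA instead of skipping verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return fmt.Errorf("stopped: %s: %w", url, err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("unhealthy: %d: %s", resp.StatusCode, body)
		}
		return nil
	}

	func main() {
		if err := checkHealthz("https://192.168.61.151:8443/healthz"); err != nil {
			fmt.Println(err)
		}
	}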
	I1217 09:30:05.310422  933313 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 09:30:05.310459  933313 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-893359/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 09:30:05.310475  933313 cache.go:65] Caching tarball of preloaded images
	I1217 09:30:05.310583  933313 preload.go:238] Found /home/jenkins/minikube-integration/22182-893359/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 09:30:05.310597  933313 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 09:30:05.310690  933313 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/force-systemd-flag-040357/config.json ...
	I1217 09:30:05.310708  933313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/force-systemd-flag-040357/config.json: {Name:mk3e7b8f9ee06c6e6da563d4ef34958d4516e0d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 09:30:05.310849  933313 start.go:360] acquireMachinesLock for force-systemd-flag-040357: {Name:mkdc91ccb2d66cdada71da88e972b4d333b7f63c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1217 09:30:08.825242  933313 start.go:364] duration metric: took 3.514317001s to acquireMachinesLock for "force-systemd-flag-040357"
	I1217 09:30:08.825324  933313 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-040357 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:force-systemd-flag-040357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 09:30:08.825465  933313 start.go:125] createHost starting for "" (driver="kvm2")
	I1217 09:30:08.827092  933313 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1217 09:30:08.827376  933313 start.go:159] libmachine.API.Create for "force-systemd-flag-040357" (driver="kvm2")
	I1217 09:30:08.827428  933313 client.go:173] LocalClient.Create starting
	I1217 09:30:08.827523  933313 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca.pem
	I1217 09:30:08.827575  933313 main.go:143] libmachine: Decoding PEM data...
	I1217 09:30:08.827607  933313 main.go:143] libmachine: Parsing certificate...
	I1217 09:30:08.827697  933313 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-893359/.minikube/certs/cert.pem
	I1217 09:30:08.827742  933313 main.go:143] libmachine: Decoding PEM data...
	I1217 09:30:08.827762  933313 main.go:143] libmachine: Parsing certificate...
	I1217 09:30:08.828250  933313 main.go:143] libmachine: creating domain...
	I1217 09:30:08.828270  933313 main.go:143] libmachine: creating network...
	I1217 09:30:08.830146  933313 main.go:143] libmachine: found existing default network
	I1217 09:30:08.830662  933313 main.go:143] libmachine: <network connections='3'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 09:30:08.832098  933313 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ec:45:aa} reservation:<nil>}
	I1217 09:30:08.833195  933313 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:32:78:b7} reservation:<nil>}
	I1217 09:30:08.833964  933313 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:9e:dc:2a} reservation:<nil>}
	I1217 09:30:08.835114  933313 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d2e3d0}
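The network.go lines above scan candidate private /24 subnets, skip any whose gateway address is already owned by a host bridge (virbr1-virbr3 here), and settle on the first free one, 192.168.72.0/24. A simplified Go sketch of that idea follows; it assumes the host's interface list is the only source of truth, whereas the real selection logic also tracks reservations, so treat it as illustrative rather than minikube's algorithm:

	// picksubnet.go - simplified sketch of "skip taken subnet / use free private subnet".
	package main

	import (
		"fmt"
		"net"
	)

	// gatewayTaken reports whether the .1 gateway of a subnet is already
	// assigned to a local interface (e.g. an existing virbr bridge).
	func gatewayTaken(gw string) bool {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return false
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.String() == gw {
				return true // e.g. virbr1 already owns 192.168.39.1
			}
		}
		return false
	}

	func main() {
		// Candidates in the order the log shows them being considered.
		candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"}
		for _, cidr := range candidates {
			_, ipnet, err := net.ParseCIDR(cidr)
			if err != nil {
				continue
			}
			gw := ipnet.IP.To4()
			gw[3] = 1 // x.y.z.1 is the bridge/gateway address
			if gatewayTaken(gw.String()) {
				fmt.Println("skipping subnet", cidr, "- taken")
				continue
			}
			fmt.Println("using free private subnet", cidr)
			return
		}
	}

Once a free subnet is chosen, the driver renders the mk-<profile> network XML shown next and creates it.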
	I1217 09:30:08.835222  933313 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-force-systemd-flag-040357</name>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 09:30:08.841379  933313 main.go:143] libmachine: creating private network mk-force-systemd-flag-040357 192.168.72.0/24...
	I1217 09:30:08.926819  933313 main.go:143] libmachine: private network mk-force-systemd-flag-040357 192.168.72.0/24 created
	I1217 09:30:08.927174  933313 main.go:143] libmachine: <network>
	  <name>mk-force-systemd-flag-040357</name>
	  <uuid>8e3703a1-1a83-42e1-a5fe-10cba9c7e5b5</uuid>
	  <bridge name='virbr4' stp='on' delay='0'/>
	  <mac address='52:54:00:a8:24:57'/>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 09:30:08.927229  933313 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22182-893359/.minikube/machines/force-systemd-flag-040357 ...
	I1217 09:30:08.927273  933313 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22182-893359/.minikube/cache/iso/amd64/minikube-v1.37.0-1765846775-22141-amd64.iso
	I1217 09:30:08.927286  933313 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22182-893359/.minikube
	I1217 09:30:08.927390  933313 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22182-893359/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22182-893359/.minikube/cache/iso/amd64/minikube-v1.37.0-1765846775-22141-amd64.iso...
	I1217 09:30:09.207315  933313 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22182-893359/.minikube/machines/force-systemd-flag-040357/id_rsa...
	I1217 09:30:09.208821  933313 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22182-893359/.minikube/machines/force-systemd-flag-040357/force-systemd-flag-040357.rawdisk...
	I1217 09:30:09.208856  933313 main.go:143] libmachine: Writing magic tar header
	I1217 09:30:09.208881  933313 main.go:143] libmachine: Writing SSH key tar header
	I1217 09:30:09.208975  933313 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22182-893359/.minikube/machines/force-systemd-flag-040357 ...
	I1217 09:30:09.209061  933313 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22182-893359/.minikube/machines/force-systemd-flag-040357
	I1217 09:30:09.209091  933313 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22182-893359/.minikube/machines/force-systemd-flag-040357 (perms=drwx------)
	I1217 09:30:09.209111  933313 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22182-893359/.minikube/machines
	I1217 09:30:09.209139  933313 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22182-893359/.minikube/machines (perms=drwxr-xr-x)
	I1217 09:30:09.209154  933313 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22182-893359/.minikube
	I1217 09:30:09.209165  933313 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22182-893359/.minikube (perms=drwxr-xr-x)
	I1217 09:30:09.209176  933313 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22182-893359
	I1217 09:30:09.209187  933313 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22182-893359 (perms=drwxrwxr-x)
	I1217 09:30:09.209201  933313 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1217 09:30:09.209216  933313 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1217 09:30:09.209231  933313 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1217 09:30:09.209245  933313 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1217 09:30:09.209259  933313 main.go:143] libmachine: checking permissions on dir: /home
	I1217 09:30:09.209268  933313 main.go:143] libmachine: skipping /home - not owner
	I1217 09:30:09.209272  933313 main.go:143] libmachine: defining domain...
	I1217 09:30:09.210921  933313 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>force-systemd-flag-040357</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22182-893359/.minikube/machines/force-systemd-flag-040357/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22182-893359/.minikube/machines/force-systemd-flag-040357/force-systemd-flag-040357.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-force-systemd-flag-040357'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1217 09:30:09.216854  933313 main.go:143] libmachine: domain force-systemd-flag-040357 has defined MAC address 52:54:00:16:4a:05 in network default
	I1217 09:30:09.217669  933313 main.go:143] libmachine: domain force-systemd-flag-040357 has defined MAC address 52:54:00:7b:8e:89 in network mk-force-systemd-flag-040357
	I1217 09:30:09.217702  933313 main.go:143] libmachine: starting domain...
	I1217 09:30:09.217710  933313 main.go:143] libmachine: ensuring networks are active...
	I1217 09:30:09.218573  933313 main.go:143] libmachine: Ensuring network default is active
	I1217 09:30:09.219078  933313 main.go:143] libmachine: Ensuring network mk-force-systemd-flag-040357 is active
	I1217 09:30:09.219868  933313 main.go:143] libmachine: getting domain XML...
	I1217 09:30:09.221175  933313 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>force-systemd-flag-040357</name>
	  <uuid>71d1245b-e2bd-4ebd-a9fb-bfac09a6017f</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22182-893359/.minikube/machines/force-systemd-flag-040357/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22182-893359/.minikube/machines/force-systemd-flag-040357/force-systemd-flag-040357.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:7b:8e:89'/>
	      <source network='mk-force-systemd-flag-040357'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:16:4a:05'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
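With the domain XML above in hand, the driver asks libvirt to define the guest and then start it (the "defining domain..." and "starting domain..." steps in this log). A hedged sketch of those two calls, assuming the libvirt.org/go/libvirt bindings and a hypothetical local file holding the XML; the kvm2 driver's actual code differs in structure and error handling:

	// definedomain.go - illustrative only: define and start a libvirt guest
	// from an XML document like the one dumped above.
	package main

	import (
		"log"
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI above
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// Hypothetical file containing the <domain> XML shown above.
		xml, err := os.ReadFile("force-systemd-flag-040357.xml")
		if err != nil {
			log.Fatal(err)
		}

		dom, err := conn.DomainDefineXML(string(xml)) // "defining domain..."
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // "starting domain..."
			log.Fatal(err)
		}
		log.Println("domain started")
	}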
	
	I1217 09:30:08.570005  933201 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 09:30:08.570039  933201 machine.go:97] duration metric: took 6.352010416s to provisionDockerMachine
	I1217 09:30:08.570055  933201 start.go:293] postStartSetup for "pause-869559" (driver="kvm2")
	I1217 09:30:08.570069  933201 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 09:30:08.570135  933201 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 09:30:08.574341  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.576044  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:08.576107  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.576543  933201 sshutil.go:56] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/pause-869559/id_rsa Username:docker}
	I1217 09:30:08.660490  933201 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 09:30:08.666474  933201 info.go:137] Remote host: Buildroot 2025.02
	I1217 09:30:08.666521  933201 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-893359/.minikube/addons for local assets ...
	I1217 09:30:08.666614  933201 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-893359/.minikube/files for local assets ...
	I1217 09:30:08.666721  933201 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-893359/.minikube/files/etc/ssl/certs/8972772.pem -> 8972772.pem in /etc/ssl/certs
	I1217 09:30:08.666867  933201 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 09:30:08.680185  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/files/etc/ssl/certs/8972772.pem --> /etc/ssl/certs/8972772.pem (1708 bytes)
	I1217 09:30:08.714215  933201 start.go:296] duration metric: took 144.139489ms for postStartSetup
	I1217 09:30:08.714273  933201 fix.go:56] duration metric: took 6.500603085s for fixHost
	I1217 09:30:08.717448  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.717959  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:08.717991  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.718232  933201 main.go:143] libmachine: Using SSH client type: native
	I1217 09:30:08.718342  933201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1217 09:30:08.718356  933201 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1217 09:30:08.825023  933201 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765963808.815889037
	
	I1217 09:30:08.825052  933201 fix.go:216] guest clock: 1765963808.815889037
	I1217 09:30:08.825062  933201 fix.go:229] Guest: 2025-12-17 09:30:08.815889037 +0000 UTC Remote: 2025-12-17 09:30:08.714280264 +0000 UTC m=+7.657224610 (delta=101.608773ms)
	I1217 09:30:08.825084  933201 fix.go:200] guest clock delta is within tolerance: 101.608773ms
	I1217 09:30:08.825103  933201 start.go:83] releasing machines lock for "pause-869559", held for 6.611463696s
	I1217 09:30:08.828854  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.829411  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:08.829449  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.830142  933201 ssh_runner.go:195] Run: cat /version.json
	I1217 09:30:08.830218  933201 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 09:30:08.833957  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.834185  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.834471  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:08.834520  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.834691  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:08.834723  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.834719  933201 sshutil.go:56] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/pause-869559/id_rsa Username:docker}
	I1217 09:30:08.834985  933201 sshutil.go:56] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/pause-869559/id_rsa Username:docker}
	I1217 09:30:08.923887  933201 ssh_runner.go:195] Run: systemctl --version
	I1217 09:30:08.952615  933201 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 09:30:09.118289  933201 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 09:30:09.129662  933201 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 09:30:09.129750  933201 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 09:30:09.141994  933201 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 09:30:09.142026  933201 start.go:496] detecting cgroup driver to use...
	I1217 09:30:09.142119  933201 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 09:30:09.162845  933201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 09:30:09.181009  933201 docker.go:218] disabling cri-docker service (if available) ...
	I1217 09:30:09.181095  933201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 09:30:09.201333  933201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 09:30:09.218961  933201 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 09:30:09.412029  933201 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 09:30:09.589245  933201 docker.go:234] disabling docker service ...
	I1217 09:30:09.589327  933201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 09:30:09.623748  933201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 09:30:09.643179  933201 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 09:30:09.842329  933201 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 09:30:10.038112  933201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 09:30:10.055605  933201 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 09:30:10.082343  933201 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 09:30:10.082420  933201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 09:30:10.097457  933201 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 09:30:10.097567  933201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 09:30:10.114705  933201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 09:30:10.129082  933201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 09:30:10.142573  933201 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 09:30:10.157154  933201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 09:30:10.169859  933201 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 09:30:10.186318  933201 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 09:30:10.203561  933201 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 09:30:10.214614  933201 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 09:30:10.225668  933201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 09:30:10.399705  933201 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 09:30:10.630565  933201 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 09:30:10.630665  933201 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 09:30:10.637597  933201 start.go:564] Will wait 60s for crictl version
	I1217 09:30:10.637667  933201 ssh_runner.go:195] Run: which crictl
	I1217 09:30:10.642067  933201 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 09:30:10.677915  933201 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1217 09:30:10.677992  933201 ssh_runner.go:195] Run: crio --version
	I1217 09:30:10.710746  933201 ssh_runner.go:195] Run: crio --version
	I1217 09:30:10.745685  933201 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	I1217 09:30:10.749691  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:10.750144  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:10.750176  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:10.750403  933201 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1217 09:30:10.755422  933201 kubeadm.go:884] updating cluster {Name:pause-869559 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-869559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 09:30:10.755630  933201 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 09:30:10.755703  933201 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 09:30:10.797537  933201 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 09:30:10.797563  933201 crio.go:433] Images already preloaded, skipping extraction
	I1217 09:30:10.797618  933201 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 09:30:10.835028  933201 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 09:30:10.835060  933201 cache_images.go:86] Images are preloaded, skipping loading
	I1217 09:30:10.835072  933201 kubeadm.go:935] updating node { 192.168.39.212 8443 v1.34.3 crio true true} ...
	I1217 09:30:10.835213  933201 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-869559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.212
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:pause-869559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
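The systemd drop-in above is rendered from the node's settings: ExecStart is cleared and re-set with per-node flags such as --hostname-override and --node-ip. A hedged text/template sketch of that rendering (the template text and field names here are mine for illustration, not minikube's actual template):

	// kubeletunit.go - illustrative rendering of a kubelet systemd drop-in
	// like the one shown above.
	package main

	import (
		"os"
		"text/template"
	)

	const dropIn = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("10-kubeadm.conf").Parse(dropIn))
		_ = t.Execute(os.Stdout, struct {
			KubernetesVersion, NodeName, NodeIP string
		}{"v1.34.3", "pause-869559", "192.168.39.212"})
	}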
	I1217 09:30:10.835312  933201 ssh_runner.go:195] Run: crio config
	I1217 09:30:10.883111  933201 cni.go:84] Creating CNI manager for ""
	I1217 09:30:10.883150  933201 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 09:30:10.883171  933201 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 09:30:10.883193  933201 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.212 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-869559 NodeName:pause-869559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.212 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 09:30:10.883327  933201 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.212
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-869559"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.212"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.212"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 09:30:10.883396  933201 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 09:30:10.897455  933201 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 09:30:10.897546  933201 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 09:30:10.909140  933201 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1217 09:30:10.931429  933201 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 09:30:10.952610  933201 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1217 09:30:10.974694  933201 ssh_runner.go:195] Run: grep 192.168.39.212	control-plane.minikube.internal$ /etc/hosts
	I1217 09:30:10.979493  933201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 09:30:11.438418  931969 api_server.go:269] stopped: https://192.168.61.151:8443/healthz: Get "https://192.168.61.151:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1217 09:30:11.438492  931969 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 09:30:11.438589  931969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 09:30:11.491489  931969 cri.go:89] found id: "e251798e70d4d460f82329b5387896274d0349650aaf071922a28a25fefb347d"
	I1217 09:30:11.491526  931969 cri.go:89] found id: "3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec"
	I1217 09:30:11.491533  931969 cri.go:89] found id: ""
	I1217 09:30:11.491543  931969 logs.go:282] 2 containers: [e251798e70d4d460f82329b5387896274d0349650aaf071922a28a25fefb347d 3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec]
	I1217 09:30:11.491609  931969 ssh_runner.go:195] Run: which crictl
	I1217 09:30:11.496290  931969 ssh_runner.go:195] Run: which crictl
	I1217 09:30:11.500674  931969 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 09:30:11.500728  931969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 09:30:11.556158  931969 cri.go:89] found id: "1ee8a08e83aee6fff7c1520688b72e31df5ebed846cf544dfe38126f5efb9219"
	I1217 09:30:11.556186  931969 cri.go:89] found id: ""
	I1217 09:30:11.556197  931969 logs.go:282] 1 containers: [1ee8a08e83aee6fff7c1520688b72e31df5ebed846cf544dfe38126f5efb9219]
	I1217 09:30:11.556268  931969 ssh_runner.go:195] Run: which crictl
	I1217 09:30:11.563672  931969 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 09:30:11.563748  931969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 09:30:11.616223  931969 cri.go:89] found id: "0b7d452f773b72aa05a84e4d57403ec44becfd48820d58c8c5a44daf5a9f80b2"
	I1217 09:30:11.616323  931969 cri.go:89] found id: ""
	I1217 09:30:11.616338  931969 logs.go:282] 1 containers: [0b7d452f773b72aa05a84e4d57403ec44becfd48820d58c8c5a44daf5a9f80b2]
	I1217 09:30:11.616411  931969 ssh_runner.go:195] Run: which crictl
	I1217 09:30:11.621936  931969 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 09:30:11.622004  931969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 09:30:11.664412  931969 cri.go:89] found id: "c7c13c202977c416a0db7f3a2551539e274763fddf8a36322d2ebfdfda7458f7"
	I1217 09:30:11.664440  931969 cri.go:89] found id: "ef5e73187ba4577410b93b3f14b58a7d1aca6215840afd6e49e8bd0424156e7a"
	I1217 09:30:11.664447  931969 cri.go:89] found id: ""
	I1217 09:30:11.664456  931969 logs.go:282] 2 containers: [c7c13c202977c416a0db7f3a2551539e274763fddf8a36322d2ebfdfda7458f7 ef5e73187ba4577410b93b3f14b58a7d1aca6215840afd6e49e8bd0424156e7a]
	I1217 09:30:11.664536  931969 ssh_runner.go:195] Run: which crictl
	I1217 09:30:11.669683  931969 ssh_runner.go:195] Run: which crictl
	I1217 09:30:11.675453  931969 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 09:30:11.675565  931969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 09:30:11.729833  931969 cri.go:89] found id: "c1a23713499eee9c1fabad66e845c1b59aa0019ae73656c31a329a694e0d4a2d"
	I1217 09:30:11.729871  931969 cri.go:89] found id: ""
	I1217 09:30:11.729884  931969 logs.go:282] 1 containers: [c1a23713499eee9c1fabad66e845c1b59aa0019ae73656c31a329a694e0d4a2d]
	I1217 09:30:11.729959  931969 ssh_runner.go:195] Run: which crictl
	I1217 09:30:11.735525  931969 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 09:30:11.735606  931969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 09:30:11.778777  931969 cri.go:89] found id: "d9725832b22843dec07a6141fe80d650192afc343f6197a2841406289eae3b16"
	I1217 09:30:11.778811  931969 cri.go:89] found id: "644136c1a2eafa8b99be19afd52abb3b4407e1fa0cd5d035c034faade87c2e41"
	I1217 09:30:11.778818  931969 cri.go:89] found id: ""
	I1217 09:30:11.778828  931969 logs.go:282] 2 containers: [d9725832b22843dec07a6141fe80d650192afc343f6197a2841406289eae3b16 644136c1a2eafa8b99be19afd52abb3b4407e1fa0cd5d035c034faade87c2e41]
	I1217 09:30:11.778898  931969 ssh_runner.go:195] Run: which crictl
	I1217 09:30:11.785184  931969 ssh_runner.go:195] Run: which crictl
	I1217 09:30:11.790560  931969 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 09:30:11.790645  931969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 09:30:11.846696  931969 cri.go:89] found id: ""
	I1217 09:30:11.846735  931969 logs.go:282] 0 containers: []
	W1217 09:30:11.846750  931969 logs.go:284] No container was found matching "kindnet"
	I1217 09:30:11.846759  931969 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 09:30:11.846844  931969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 09:30:11.893281  931969 cri.go:89] found id: "76338f44d08ab670fac5aa5eeeeb89ac90ba80194e50cc8eed11133a23a2d0d1"
	I1217 09:30:11.893313  931969 cri.go:89] found id: ""
	I1217 09:30:11.893325  931969 logs.go:282] 1 containers: [76338f44d08ab670fac5aa5eeeeb89ac90ba80194e50cc8eed11133a23a2d0d1]
	I1217 09:30:11.893395  931969 ssh_runner.go:195] Run: which crictl
	I1217 09:30:11.899666  931969 logs.go:123] Gathering logs for kube-scheduler [ef5e73187ba4577410b93b3f14b58a7d1aca6215840afd6e49e8bd0424156e7a] ...
	I1217 09:30:11.899695  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef5e73187ba4577410b93b3f14b58a7d1aca6215840afd6e49e8bd0424156e7a"
	I1217 09:30:11.952414  931969 logs.go:123] Gathering logs for kube-proxy [c1a23713499eee9c1fabad66e845c1b59aa0019ae73656c31a329a694e0d4a2d] ...
	I1217 09:30:11.952445  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1a23713499eee9c1fabad66e845c1b59aa0019ae73656c31a329a694e0d4a2d"
	I1217 09:30:11.996635  931969 logs.go:123] Gathering logs for container status ...
	I1217 09:30:11.996671  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 09:30:12.054044  931969 logs.go:123] Gathering logs for storage-provisioner [76338f44d08ab670fac5aa5eeeeb89ac90ba80194e50cc8eed11133a23a2d0d1] ...
	I1217 09:30:12.054086  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76338f44d08ab670fac5aa5eeeeb89ac90ba80194e50cc8eed11133a23a2d0d1"
	I1217 09:30:12.095570  931969 logs.go:123] Gathering logs for CRI-O ...
	I1217 09:30:12.095603  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 09:30:12.505218  931969 logs.go:123] Gathering logs for dmesg ...
	I1217 09:30:12.505255  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 09:30:12.526859  931969 logs.go:123] Gathering logs for kube-apiserver [e251798e70d4d460f82329b5387896274d0349650aaf071922a28a25fefb347d] ...
	I1217 09:30:12.526904  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e251798e70d4d460f82329b5387896274d0349650aaf071922a28a25fefb347d"
	I1217 09:30:12.582559  931969 logs.go:123] Gathering logs for kube-apiserver [3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec] ...
	I1217 09:30:12.582601  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec"
	W1217 09:30:12.635210  931969 logs.go:130] failed kube-apiserver [3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec": Process exited with status 1
	stdout:
	
	stderr:
	E1217 09:30:12.614873    2981 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec\": container with ID starting with 3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec not found: ID does not exist" containerID="3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec"
	time="2025-12-17T09:30:12Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec\": container with ID starting with 3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec not found: ID does not exist"
	 output: 
	** stderr ** 
	E1217 09:30:12.614873    2981 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec\": container with ID starting with 3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec not found: ID does not exist" containerID="3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec"
	time="2025-12-17T09:30:12Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec\": container with ID starting with 3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec not found: ID does not exist"
	
	** /stderr **
	I1217 09:30:12.635237  931969 logs.go:123] Gathering logs for coredns [0b7d452f773b72aa05a84e4d57403ec44becfd48820d58c8c5a44daf5a9f80b2] ...
	I1217 09:30:12.635256  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b7d452f773b72aa05a84e4d57403ec44becfd48820d58c8c5a44daf5a9f80b2"
	I1217 09:30:12.687799  931969 logs.go:123] Gathering logs for kube-scheduler [c7c13c202977c416a0db7f3a2551539e274763fddf8a36322d2ebfdfda7458f7] ...
	I1217 09:30:12.687843  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7c13c202977c416a0db7f3a2551539e274763fddf8a36322d2ebfdfda7458f7"
	I1217 09:30:12.772439  931969 logs.go:123] Gathering logs for describe nodes ...
	I1217 09:30:12.772483  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 09:30:12.879283  931969 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 09:30:12.879315  931969 logs.go:123] Gathering logs for etcd [1ee8a08e83aee6fff7c1520688b72e31df5ebed846cf544dfe38126f5efb9219] ...
	I1217 09:30:12.879337  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ee8a08e83aee6fff7c1520688b72e31df5ebed846cf544dfe38126f5efb9219"
	I1217 09:30:12.939277  931969 logs.go:123] Gathering logs for kube-controller-manager [d9725832b22843dec07a6141fe80d650192afc343f6197a2841406289eae3b16] ...
	I1217 09:30:12.939335  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9725832b22843dec07a6141fe80d650192afc343f6197a2841406289eae3b16"
	I1217 09:30:12.982770  931969 logs.go:123] Gathering logs for kube-controller-manager [644136c1a2eafa8b99be19afd52abb3b4407e1fa0cd5d035c034faade87c2e41] ...
	I1217 09:30:12.982817  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 644136c1a2eafa8b99be19afd52abb3b4407e1fa0cd5d035c034faade87c2e41"
	I1217 09:30:13.030162  931969 logs.go:123] Gathering logs for kubelet ...
	I1217 09:30:13.030209  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 09:30:13.069491  931969 logs.go:138] Found kubelet problem: Dec 17 09:28:29 running-upgrade-879489 kubelet[1253]: I1217 09:28:29.703410    1253 status_manager.go:890] "Failed to get status for pod" podUID="4f726c22-c99a-48ae-91e1-bc389f05d70b" pod="kube-system/storage-provisioner" err="pods \"storage-provisioner\" is forbidden: User \"system:node:running-upgrade-879489\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'running-upgrade-879489' and this object"
	I1217 09:30:13.157446  931969 out.go:374] Setting ErrFile to fd 2...
	I1217 09:30:13.157487  931969 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1217 09:30:13.157578  931969 out.go:285] X Problems detected in kubelet:
	W1217 09:30:13.157596  931969 out.go:285]   Dec 17 09:28:29 running-upgrade-879489 kubelet[1253]: I1217 09:28:29.703410    1253 status_manager.go:890] "Failed to get status for pod" podUID="4f726c22-c99a-48ae-91e1-bc389f05d70b" pod="kube-system/storage-provisioner" err="pods \"storage-provisioner\" is forbidden: User \"system:node:running-upgrade-879489\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'running-upgrade-879489' and this object"
	I1217 09:30:13.157605  931969 out.go:374] Setting ErrFile to fd 2...
	I1217 09:30:13.157617  931969 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 09:30:10.684936  933313 main.go:143] libmachine: waiting for domain to start...
	I1217 09:30:10.686560  933313 main.go:143] libmachine: domain is now running
	I1217 09:30:10.686577  933313 main.go:143] libmachine: waiting for IP...
	I1217 09:30:10.687394  933313 main.go:143] libmachine: domain force-systemd-flag-040357 has defined MAC address 52:54:00:7b:8e:89 in network mk-force-systemd-flag-040357
	I1217 09:30:10.688054  933313 main.go:143] libmachine: no network interface addresses found for domain force-systemd-flag-040357 (source=lease)
	I1217 09:30:10.688072  933313 main.go:143] libmachine: trying to list again with source=arp
	I1217 09:30:10.688409  933313 main.go:143] libmachine: unable to find current IP address of domain force-systemd-flag-040357 in network mk-force-systemd-flag-040357 (interfaces detected: [])
	I1217 09:30:10.688462  933313 retry.go:31] will retry after 276.471752ms: waiting for domain to come up
	I1217 09:30:10.967099  933313 main.go:143] libmachine: domain force-systemd-flag-040357 has defined MAC address 52:54:00:7b:8e:89 in network mk-force-systemd-flag-040357
	I1217 09:30:10.967935  933313 main.go:143] libmachine: no network interface addresses found for domain force-systemd-flag-040357 (source=lease)
	I1217 09:30:10.967951  933313 main.go:143] libmachine: trying to list again with source=arp
	I1217 09:30:10.968434  933313 main.go:143] libmachine: unable to find current IP address of domain force-systemd-flag-040357 in network mk-force-systemd-flag-040357 (interfaces detected: [])
	I1217 09:30:10.968479  933313 retry.go:31] will retry after 312.692232ms: waiting for domain to come up
	I1217 09:30:11.283186  933313 main.go:143] libmachine: domain force-systemd-flag-040357 has defined MAC address 52:54:00:7b:8e:89 in network mk-force-systemd-flag-040357
	I1217 09:30:11.284164  933313 main.go:143] libmachine: no network interface addresses found for domain force-systemd-flag-040357 (source=lease)
	I1217 09:30:11.284191  933313 main.go:143] libmachine: trying to list again with source=arp
	I1217 09:30:11.284721  933313 main.go:143] libmachine: unable to find current IP address of domain force-systemd-flag-040357 in network mk-force-systemd-flag-040357 (interfaces detected: [])
	I1217 09:30:11.284773  933313 retry.go:31] will retry after 340.039444ms: waiting for domain to come up
	I1217 09:30:11.626297  933313 main.go:143] libmachine: domain force-systemd-flag-040357 has defined MAC address 52:54:00:7b:8e:89 in network mk-force-systemd-flag-040357
	I1217 09:30:11.627137  933313 main.go:143] libmachine: no network interface addresses found for domain force-systemd-flag-040357 (source=lease)
	I1217 09:30:11.627155  933313 main.go:143] libmachine: trying to list again with source=arp
	I1217 09:30:11.627639  933313 main.go:143] libmachine: unable to find current IP address of domain force-systemd-flag-040357 in network mk-force-systemd-flag-040357 (interfaces detected: [])
	I1217 09:30:11.627708  933313 retry.go:31] will retry after 591.270059ms: waiting for domain to come up
	I1217 09:30:12.220806  933313 main.go:143] libmachine: domain force-systemd-flag-040357 has defined MAC address 52:54:00:7b:8e:89 in network mk-force-systemd-flag-040357
	I1217 09:30:12.221742  933313 main.go:143] libmachine: no network interface addresses found for domain force-systemd-flag-040357 (source=lease)
	I1217 09:30:12.221768  933313 main.go:143] libmachine: trying to list again with source=arp
	I1217 09:30:12.222220  933313 main.go:143] libmachine: unable to find current IP address of domain force-systemd-flag-040357 in network mk-force-systemd-flag-040357 (interfaces detected: [])
	I1217 09:30:12.222269  933313 retry.go:31] will retry after 571.901863ms: waiting for domain to come up
	I1217 09:30:12.796733  933313 main.go:143] libmachine: domain force-systemd-flag-040357 has defined MAC address 52:54:00:7b:8e:89 in network mk-force-systemd-flag-040357
	I1217 09:30:12.797810  933313 main.go:143] libmachine: no network interface addresses found for domain force-systemd-flag-040357 (source=lease)
	I1217 09:30:12.797842  933313 main.go:143] libmachine: trying to list again with source=arp
	I1217 09:30:12.798282  933313 main.go:143] libmachine: unable to find current IP address of domain force-systemd-flag-040357 in network mk-force-systemd-flag-040357 (interfaces detected: [])
	I1217 09:30:12.798331  933313 retry.go:31] will retry after 809.844209ms: waiting for domain to come up
	I1217 09:30:13.609247  933313 main.go:143] libmachine: domain force-systemd-flag-040357 has defined MAC address 52:54:00:7b:8e:89 in network mk-force-systemd-flag-040357
	I1217 09:30:13.609908  933313 main.go:143] libmachine: no network interface addresses found for domain force-systemd-flag-040357 (source=lease)
	I1217 09:30:13.609928  933313 main.go:143] libmachine: trying to list again with source=arp
	I1217 09:30:13.610265  933313 main.go:143] libmachine: unable to find current IP address of domain force-systemd-flag-040357 in network mk-force-systemd-flag-040357 (interfaces detected: [])
	I1217 09:30:13.610305  933313 retry.go:31] will retry after 1.026301637s: waiting for domain to come up
	I1217 09:30:14.638534  933313 main.go:143] libmachine: domain force-systemd-flag-040357 has defined MAC address 52:54:00:7b:8e:89 in network mk-force-systemd-flag-040357
	I1217 09:30:14.639129  933313 main.go:143] libmachine: no network interface addresses found for domain force-systemd-flag-040357 (source=lease)
	I1217 09:30:14.639145  933313 main.go:143] libmachine: trying to list again with source=arp
	I1217 09:30:14.639464  933313 main.go:143] libmachine: unable to find current IP address of domain force-systemd-flag-040357 in network mk-force-systemd-flag-040357 (interfaces detected: [])
	I1217 09:30:14.639528  933313 retry.go:31] will retry after 1.465870285s: waiting for domain to come up
	I1217 09:30:11.163980  933201 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 09:30:11.188259  933201 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/pause-869559 for IP: 192.168.39.212
	I1217 09:30:11.188285  933201 certs.go:195] generating shared ca certs ...
	I1217 09:30:11.188305  933201 certs.go:227] acquiring lock for ca certs: {Name:mk9975fd3c0c6324a63f90fa6e20c46f3034e6ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 09:30:11.188473  933201 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-893359/.minikube/ca.key
	I1217 09:30:11.188561  933201 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-893359/.minikube/proxy-client-ca.key
	I1217 09:30:11.188585  933201 certs.go:257] generating profile certs ...
	I1217 09:30:11.188707  933201 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/pause-869559/client.key
	I1217 09:30:11.188802  933201 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/pause-869559/apiserver.key.873948f8
	I1217 09:30:11.188878  933201 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/pause-869559/proxy-client.key
	I1217 09:30:11.189026  933201 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/897277.pem (1338 bytes)
	W1217 09:30:11.189071  933201 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-893359/.minikube/certs/897277_empty.pem, impossibly tiny 0 bytes
	I1217 09:30:11.189087  933201 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 09:30:11.189132  933201 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca.pem (1078 bytes)
	I1217 09:30:11.189179  933201 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/cert.pem (1123 bytes)
	I1217 09:30:11.189218  933201 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/key.pem (1675 bytes)
	I1217 09:30:11.189281  933201 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-893359/.minikube/files/etc/ssl/certs/8972772.pem (1708 bytes)
	I1217 09:30:11.189920  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 09:30:11.223103  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 09:30:11.258163  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 09:30:11.290700  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 09:30:11.336294  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/pause-869559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 09:30:11.373412  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/pause-869559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 09:30:11.407240  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/pause-869559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 09:30:11.440057  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/pause-869559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 09:30:11.481401  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 09:30:11.606882  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/certs/897277.pem --> /usr/share/ca-certificates/897277.pem (1338 bytes)
	I1217 09:30:11.692782  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/files/etc/ssl/certs/8972772.pem --> /usr/share/ca-certificates/8972772.pem (1708 bytes)
	I1217 09:30:11.787594  933201 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 09:30:11.868730  933201 ssh_runner.go:195] Run: openssl version
	I1217 09:30:11.884874  933201 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 09:30:11.949201  933201 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 09:30:12.020522  933201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 09:30:12.037145  933201 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 08:16 /usr/share/ca-certificates/minikubeCA.pem
	I1217 09:30:12.037251  933201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 09:30:12.053797  933201 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 09:30:12.092653  933201 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/897277.pem
	I1217 09:30:12.126174  933201 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/897277.pem /etc/ssl/certs/897277.pem
	I1217 09:30:12.159722  933201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/897277.pem
	I1217 09:30:12.176614  933201 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 08:35 /usr/share/ca-certificates/897277.pem
	I1217 09:30:12.176720  933201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/897277.pem
	I1217 09:30:12.203040  933201 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 09:30:12.223867  933201 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8972772.pem
	I1217 09:30:12.248084  933201 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8972772.pem /etc/ssl/certs/8972772.pem
	I1217 09:30:12.274635  933201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8972772.pem
	I1217 09:30:12.292395  933201 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 08:35 /usr/share/ca-certificates/8972772.pem
	I1217 09:30:12.292463  933201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8972772.pem
	I1217 09:30:12.314704  933201 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 09:30:12.347685  933201 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 09:30:12.356735  933201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 09:30:12.373390  933201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 09:30:12.402432  933201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 09:30:12.421754  933201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 09:30:12.441110  933201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 09:30:12.471578  933201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 09:30:12.496274  933201 kubeadm.go:401] StartCluster: {Name:pause-869559 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-869559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 09:30:12.496454  933201 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 09:30:12.496546  933201 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 09:30:12.625666  933201 cri.go:89] found id: "e0337c6439f8b8fce3bb0b66c3433d27794c3f9fa268ba8b7117cdda47e7ac5b"
	I1217 09:30:12.625696  933201 cri.go:89] found id: "4f631c9821b5e83eed22adff990fd2581e6c144822df0e654d14bd419b364ac6"
	I1217 09:30:12.625703  933201 cri.go:89] found id: "e331a882f16a8c302e6391a48abd39817a2b42d9c81de9f1b744ae81e2a67ad7"
	I1217 09:30:12.625708  933201 cri.go:89] found id: "2abdf54511473a3eef2b8ef0906e02182f4eb5e5f0bb0c765961af5b82cfce71"
	I1217 09:30:12.625713  933201 cri.go:89] found id: "86f9b55f6bae4f650c85a2f7d60899240c9afcb2f5ab7b3a2a8a69519d939917"
	I1217 09:30:12.625718  933201 cri.go:89] found id: "47326fe04fc5bff2fe0eb071e7d7d76e2e37d6be23dcd9075195432501497e5e"
	I1217 09:30:12.625723  933201 cri.go:89] found id: "e06d3f3a0193a41ffac4306c738bb419c8aa15f440fec5d686635327ea6a97ed"
	I1217 09:30:12.625729  933201 cri.go:89] found id: ""
	I1217 09:30:12.625785  933201 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-869559 -n pause-869559
helpers_test.go:270: (dbg) Run:  kubectl --context pause-869559 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-869559 -n pause-869559
helpers_test.go:253: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-869559 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-869559 logs -n 25: (1.445324977s)
helpers_test.go:261: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-960765 sudo systemctl cat crio --no-pager                                                                                                         │ cilium-960765             │ jenkins │ v1.37.0 │ 17 Dec 25 09:27 UTC │                     │
	│ ssh     │ -p cilium-960765 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                               │ cilium-960765             │ jenkins │ v1.37.0 │ 17 Dec 25 09:27 UTC │                     │
	│ ssh     │ -p cilium-960765 sudo crio config                                                                                                                           │ cilium-960765             │ jenkins │ v1.37.0 │ 17 Dec 25 09:27 UTC │                     │
	│ delete  │ -p cilium-960765                                                                                                                                            │ cilium-960765             │ jenkins │ v1.37.0 │ 17 Dec 25 09:27 UTC │ 17 Dec 25 09:27 UTC │
	│ start   │ -p running-upgrade-879489 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                          │ running-upgrade-879489    │ jenkins │ v1.35.0 │ 17 Dec 25 09:27 UTC │ 17 Dec 25 09:28 UTC │
	│ start   │ -p NoKubernetes-229767 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                         │ NoKubernetes-229767       │ jenkins │ v1.37.0 │ 17 Dec 25 09:27 UTC │ 17 Dec 25 09:28 UTC │
	│ start   │ -p kubernetes-upgrade-317314 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                             │ kubernetes-upgrade-317314 │ jenkins │ v1.37.0 │ 17 Dec 25 09:27 UTC │                     │
	│ start   │ -p kubernetes-upgrade-317314 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                 │ kubernetes-upgrade-317314 │ jenkins │ v1.37.0 │ 17 Dec 25 09:27 UTC │ 17 Dec 25 09:28 UTC │
	│ stop    │ stopped-upgrade-916798 stop                                                                                                                                 │ stopped-upgrade-916798    │ jenkins │ v1.35.0 │ 17 Dec 25 09:28 UTC │ 17 Dec 25 09:28 UTC │
	│ start   │ -p stopped-upgrade-916798 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ stopped-upgrade-916798    │ jenkins │ v1.37.0 │ 17 Dec 25 09:28 UTC │ 17 Dec 25 09:28 UTC │
	│ delete  │ -p NoKubernetes-229767                                                                                                                                      │ NoKubernetes-229767       │ jenkins │ v1.37.0 │ 17 Dec 25 09:28 UTC │ 17 Dec 25 09:28 UTC │
	│ start   │ -p NoKubernetes-229767 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                         │ NoKubernetes-229767       │ jenkins │ v1.37.0 │ 17 Dec 25 09:28 UTC │ 17 Dec 25 09:29 UTC │
	│ start   │ -p running-upgrade-879489 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ running-upgrade-879489    │ jenkins │ v1.37.0 │ 17 Dec 25 09:28 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-317314                                                                                                                                │ kubernetes-upgrade-317314 │ jenkins │ v1.37.0 │ 17 Dec 25 09:28 UTC │ 17 Dec 25 09:28 UTC │
	│ start   │ -p pause-869559 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                     │ pause-869559              │ jenkins │ v1.37.0 │ 17 Dec 25 09:28 UTC │ 17 Dec 25 09:30 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-916798 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ stopped-upgrade-916798    │ jenkins │ v1.37.0 │ 17 Dec 25 09:28 UTC │                     │
	│ delete  │ -p stopped-upgrade-916798                                                                                                                                   │ stopped-upgrade-916798    │ jenkins │ v1.37.0 │ 17 Dec 25 09:28 UTC │ 17 Dec 25 09:28 UTC │
	│ start   │ -p cert-expiration-779044 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                        │ cert-expiration-779044    │ jenkins │ v1.37.0 │ 17 Dec 25 09:28 UTC │ 17 Dec 25 09:30 UTC │
	│ ssh     │ -p NoKubernetes-229767 sudo systemctl is-active --quiet service kubelet                                                                                     │ NoKubernetes-229767       │ jenkins │ v1.37.0 │ 17 Dec 25 09:29 UTC │                     │
	│ stop    │ -p NoKubernetes-229767                                                                                                                                      │ NoKubernetes-229767       │ jenkins │ v1.37.0 │ 17 Dec 25 09:29 UTC │ 17 Dec 25 09:29 UTC │
	│ start   │ -p NoKubernetes-229767 --driver=kvm2  --container-runtime=crio                                                                                              │ NoKubernetes-229767       │ jenkins │ v1.37.0 │ 17 Dec 25 09:29 UTC │ 17 Dec 25 09:30 UTC │
	│ start   │ -p pause-869559 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                              │ pause-869559              │ jenkins │ v1.37.0 │ 17 Dec 25 09:30 UTC │ 17 Dec 25 09:30 UTC │
	│ ssh     │ -p NoKubernetes-229767 sudo systemctl is-active --quiet service kubelet                                                                                     │ NoKubernetes-229767       │ jenkins │ v1.37.0 │ 17 Dec 25 09:30 UTC │                     │
	│ delete  │ -p NoKubernetes-229767                                                                                                                                      │ NoKubernetes-229767       │ jenkins │ v1.37.0 │ 17 Dec 25 09:30 UTC │ 17 Dec 25 09:30 UTC │
	│ start   │ -p force-systemd-flag-040357 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                   │ force-systemd-flag-040357 │ jenkins │ v1.37.0 │ 17 Dec 25 09:30 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 09:30:05
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 09:30:05.256531  933313 out.go:360] Setting OutFile to fd 1 ...
	I1217 09:30:05.256649  933313 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 09:30:05.256658  933313 out.go:374] Setting ErrFile to fd 2...
	I1217 09:30:05.256662  933313 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 09:30:05.256843  933313 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
	I1217 09:30:05.257322  933313 out.go:368] Setting JSON to false
	I1217 09:30:05.258252  933313 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":15151,"bootTime":1765948654,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 09:30:05.258305  933313 start.go:143] virtualization: kvm guest
	I1217 09:30:05.260413  933313 out.go:179] * [force-systemd-flag-040357] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 09:30:05.261619  933313 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 09:30:05.261611  933313 notify.go:221] Checking for updates...
	I1217 09:30:05.263866  933313 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 09:30:05.265066  933313 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-893359/kubeconfig
	I1217 09:30:05.266089  933313 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-893359/.minikube
	I1217 09:30:05.267216  933313 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 09:30:05.268361  933313 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 09:30:05.270088  933313 config.go:182] Loaded profile config "cert-expiration-779044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 09:30:05.270313  933313 config.go:182] Loaded profile config "pause-869559": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 09:30:05.270455  933313 config.go:182] Loaded profile config "running-upgrade-879489": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1217 09:30:05.270590  933313 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 09:30:05.305722  933313 out.go:179] * Using the kvm2 driver based on user configuration
	I1217 09:30:05.306712  933313 start.go:309] selected driver: kvm2
	I1217 09:30:05.306733  933313 start.go:927] validating driver "kvm2" against <nil>
	I1217 09:30:05.306747  933313 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 09:30:05.307454  933313 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 09:30:05.307735  933313 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 09:30:05.307774  933313 cni.go:84] Creating CNI manager for ""
	I1217 09:30:05.307820  933313 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 09:30:05.307830  933313 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 09:30:05.307869  933313 start.go:353] cluster config:
	{Name:force-systemd-flag-040357 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:force-systemd-flag-040357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 09:30:05.307959  933313 iso.go:125] acquiring lock: {Name:mk258687bf3be9c6817f84af5b9e08a4f47b5420 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 09:30:05.309328  933313 out.go:179] * Starting "force-systemd-flag-040357" primary control-plane node in "force-systemd-flag-040357" cluster
	I1217 09:30:02.217982  933201 out.go:252] * Updating the running kvm2 "pause-869559" VM ...
	I1217 09:30:02.218017  933201 machine.go:94] provisionDockerMachine start ...
	I1217 09:30:02.222109  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.222619  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:02.222651  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.222907  933201 main.go:143] libmachine: Using SSH client type: native
	I1217 09:30:02.223032  933201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1217 09:30:02.223047  933201 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 09:30:02.344762  933201 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-869559
	
	I1217 09:30:02.344809  933201 buildroot.go:166] provisioning hostname "pause-869559"
	I1217 09:30:02.348306  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.348882  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:02.348922  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.349141  933201 main.go:143] libmachine: Using SSH client type: native
	I1217 09:30:02.349257  933201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1217 09:30:02.349274  933201 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-869559 && echo "pause-869559" | sudo tee /etc/hostname
	I1217 09:30:02.479333  933201 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-869559
	
	I1217 09:30:02.482491  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.483012  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:02.483051  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.483239  933201 main.go:143] libmachine: Using SSH client type: native
	I1217 09:30:02.483338  933201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1217 09:30:02.483361  933201 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-869559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-869559/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-869559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 09:30:02.605593  933201 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 09:30:02.605630  933201 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22182-893359/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-893359/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-893359/.minikube}
	I1217 09:30:02.605667  933201 buildroot.go:174] setting up certificates
	I1217 09:30:02.605676  933201 provision.go:84] configureAuth start
	I1217 09:30:02.609261  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.609917  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:02.609954  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.613098  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.613551  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:02.613608  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.613769  933201 provision.go:143] copyHostCerts
	I1217 09:30:02.613834  933201 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-893359/.minikube/ca.pem, removing ...
	I1217 09:30:02.613855  933201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-893359/.minikube/ca.pem
	I1217 09:30:02.613923  933201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-893359/.minikube/ca.pem (1078 bytes)
	I1217 09:30:02.614062  933201 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-893359/.minikube/cert.pem, removing ...
	I1217 09:30:02.614075  933201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-893359/.minikube/cert.pem
	I1217 09:30:02.614109  933201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-893359/.minikube/cert.pem (1123 bytes)
	I1217 09:30:02.614189  933201 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-893359/.minikube/key.pem, removing ...
	I1217 09:30:02.614200  933201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-893359/.minikube/key.pem
	I1217 09:30:02.614230  933201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-893359/.minikube/key.pem (1675 bytes)
	I1217 09:30:02.614297  933201 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-893359/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca-key.pem org=jenkins.pause-869559 san=[127.0.0.1 192.168.39.212 localhost minikube pause-869559]
	I1217 09:30:02.784849  933201 provision.go:177] copyRemoteCerts
	I1217 09:30:02.784927  933201 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 09:30:02.788256  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.788861  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:02.788902  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.789126  933201 sshutil.go:56] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/pause-869559/id_rsa Username:docker}
	I1217 09:30:02.877674  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 09:30:02.910923  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 09:30:02.947310  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 09:30:02.984536  933201 provision.go:87] duration metric: took 378.825094ms to configureAuth
	I1217 09:30:02.984573  933201 buildroot.go:189] setting minikube options for container-runtime
	I1217 09:30:02.984890  933201 config.go:182] Loaded profile config "pause-869559": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 09:30:02.988586  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.989118  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:02.989159  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:02.989434  933201 main.go:143] libmachine: Using SSH client type: native
	I1217 09:30:02.989590  933201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1217 09:30:02.989608  933201 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 09:30:06.438127  931969 api_server.go:269] stopped: https://192.168.61.151:8443/healthz: Get "https://192.168.61.151:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1217 09:30:06.438181  931969 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
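The 931969 process is repeatedly probing the apiserver's /healthz endpoint and hitting the per-request client timeout. A minimal sketch of that kind of poll loop (TLS verification is skipped here because the test apiserver uses its own CA; the URL, per-request timeout, and overall budget are illustrative):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second, // per-request deadline, like the "Client.Timeout exceeded" above
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.151:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		fmt.Println("apiserver not ready yet:", err)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for /healthz")
}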
	I1217 09:30:05.310422  933313 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 09:30:05.310459  933313 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-893359/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 09:30:05.310475  933313 cache.go:65] Caching tarball of preloaded images
	I1217 09:30:05.310583  933313 preload.go:238] Found /home/jenkins/minikube-integration/22182-893359/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 09:30:05.310597  933313 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 09:30:05.310690  933313 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/force-systemd-flag-040357/config.json ...
	I1217 09:30:05.310708  933313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/force-systemd-flag-040357/config.json: {Name:mk3e7b8f9ee06c6e6da563d4ef34958d4516e0d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 09:30:05.310849  933313 start.go:360] acquireMachinesLock for force-systemd-flag-040357: {Name:mkdc91ccb2d66cdada71da88e972b4d333b7f63c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1217 09:30:08.825242  933313 start.go:364] duration metric: took 3.514317001s to acquireMachinesLock for "force-systemd-flag-040357"
	I1217 09:30:08.825324  933313 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-040357 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.34.3 ClusterName:force-systemd-flag-040357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 09:30:08.825465  933313 start.go:125] createHost starting for "" (driver="kvm2")
	I1217 09:30:08.827092  933313 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1217 09:30:08.827376  933313 start.go:159] libmachine.API.Create for "force-systemd-flag-040357" (driver="kvm2")
	I1217 09:30:08.827428  933313 client.go:173] LocalClient.Create starting
	I1217 09:30:08.827523  933313 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca.pem
	I1217 09:30:08.827575  933313 main.go:143] libmachine: Decoding PEM data...
	I1217 09:30:08.827607  933313 main.go:143] libmachine: Parsing certificate...
	I1217 09:30:08.827697  933313 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-893359/.minikube/certs/cert.pem
	I1217 09:30:08.827742  933313 main.go:143] libmachine: Decoding PEM data...
	I1217 09:30:08.827762  933313 main.go:143] libmachine: Parsing certificate...
	I1217 09:30:08.828250  933313 main.go:143] libmachine: creating domain...
	I1217 09:30:08.828270  933313 main.go:143] libmachine: creating network...
	I1217 09:30:08.830146  933313 main.go:143] libmachine: found existing default network
	I1217 09:30:08.830662  933313 main.go:143] libmachine: <network connections='3'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 09:30:08.832098  933313 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ec:45:aa} reservation:<nil>}
	I1217 09:30:08.833195  933313 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:32:78:b7} reservation:<nil>}
	I1217 09:30:08.833964  933313 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:9e:dc:2a} reservation:<nil>}
	I1217 09:30:08.835114  933313 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d2e3d0}
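The driver skips 192.168.39.0/24, 192.168.50.0/24 and 192.168.61.0/24 because host bridges already sit in them, then settles on 192.168.72.0/24. A rough sketch of that idea, not the driver's code: walk candidate private /24s and take the first one that no local interface address falls into (the candidate list and step between octets are illustrative):

package main

import (
	"fmt"
	"net"
)

// subnetTaken reports whether any local interface address lies inside cidr.
func subnetTaken(cidr string) bool {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return true
	}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true
	}
	for _, a := range addrs {
		if ipn, ok := a.(*net.IPNet); ok && ipnet.Contains(ipn.IP) {
			return true
		}
	}
	return false
}

func main() {
	// Candidate third octets roughly matching the 192.168.x.0/24 ranges seen in the log.
	for _, octet := range []int{39, 50, 61, 72, 83, 94} {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if subnetTaken(cidr) {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", cidr)
		return
	}
	fmt.Println("no free subnet found")
}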
	I1217 09:30:08.835222  933313 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-force-systemd-flag-040357</name>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 09:30:08.841379  933313 main.go:143] libmachine: creating private network mk-force-systemd-flag-040357 192.168.72.0/24...
	I1217 09:30:08.926819  933313 main.go:143] libmachine: private network mk-force-systemd-flag-040357 192.168.72.0/24 created
	I1217 09:30:08.927174  933313 main.go:143] libmachine: <network>
	  <name>mk-force-systemd-flag-040357</name>
	  <uuid>8e3703a1-1a83-42e1-a5fe-10cba9c7e5b5</uuid>
	  <bridge name='virbr4' stp='on' delay='0'/>
	  <mac address='52:54:00:a8:24:57'/>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 09:30:08.927229  933313 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22182-893359/.minikube/machines/force-systemd-flag-040357 ...
	I1217 09:30:08.927273  933313 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22182-893359/.minikube/cache/iso/amd64/minikube-v1.37.0-1765846775-22141-amd64.iso
	I1217 09:30:08.927286  933313 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22182-893359/.minikube
	I1217 09:30:08.927390  933313 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22182-893359/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22182-893359/.minikube/cache/iso/amd64/minikube-v1.37.0-1765846775-22141-amd64.iso...
	I1217 09:30:09.207315  933313 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22182-893359/.minikube/machines/force-systemd-flag-040357/id_rsa...
	I1217 09:30:09.208821  933313 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22182-893359/.minikube/machines/force-systemd-flag-040357/force-systemd-flag-040357.rawdisk...
	I1217 09:30:09.208856  933313 main.go:143] libmachine: Writing magic tar header
	I1217 09:30:09.208881  933313 main.go:143] libmachine: Writing SSH key tar header
	I1217 09:30:09.208975  933313 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22182-893359/.minikube/machines/force-systemd-flag-040357 ...
	I1217 09:30:09.209061  933313 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22182-893359/.minikube/machines/force-systemd-flag-040357
	I1217 09:30:09.209091  933313 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22182-893359/.minikube/machines/force-systemd-flag-040357 (perms=drwx------)
	I1217 09:30:09.209111  933313 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22182-893359/.minikube/machines
	I1217 09:30:09.209139  933313 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22182-893359/.minikube/machines (perms=drwxr-xr-x)
	I1217 09:30:09.209154  933313 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22182-893359/.minikube
	I1217 09:30:09.209165  933313 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22182-893359/.minikube (perms=drwxr-xr-x)
	I1217 09:30:09.209176  933313 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22182-893359
	I1217 09:30:09.209187  933313 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22182-893359 (perms=drwxrwxr-x)
	I1217 09:30:09.209201  933313 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1217 09:30:09.209216  933313 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1217 09:30:09.209231  933313 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1217 09:30:09.209245  933313 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1217 09:30:09.209259  933313 main.go:143] libmachine: checking permissions on dir: /home
	I1217 09:30:09.209268  933313 main.go:143] libmachine: skipping /home - not owner
	I1217 09:30:09.209272  933313 main.go:143] libmachine: defining domain...
	I1217 09:30:09.210921  933313 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>force-systemd-flag-040357</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22182-893359/.minikube/machines/force-systemd-flag-040357/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22182-893359/.minikube/machines/force-systemd-flag-040357/force-systemd-flag-040357.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-force-systemd-flag-040357'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1217 09:30:09.216854  933313 main.go:143] libmachine: domain force-systemd-flag-040357 has defined MAC address 52:54:00:16:4a:05 in network default
	I1217 09:30:09.217669  933313 main.go:143] libmachine: domain force-systemd-flag-040357 has defined MAC address 52:54:00:7b:8e:89 in network mk-force-systemd-flag-040357
	I1217 09:30:09.217702  933313 main.go:143] libmachine: starting domain...
	I1217 09:30:09.217710  933313 main.go:143] libmachine: ensuring networks are active...
	I1217 09:30:09.218573  933313 main.go:143] libmachine: Ensuring network default is active
	I1217 09:30:09.219078  933313 main.go:143] libmachine: Ensuring network mk-force-systemd-flag-040357 is active
	I1217 09:30:09.219868  933313 main.go:143] libmachine: getting domain XML...
	I1217 09:30:09.221175  933313 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>force-systemd-flag-040357</name>
	  <uuid>71d1245b-e2bd-4ebd-a9fb-bfac09a6017f</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22182-893359/.minikube/machines/force-systemd-flag-040357/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22182-893359/.minikube/machines/force-systemd-flag-040357/force-systemd-flag-040357.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:7b:8e:89'/>
	      <source network='mk-force-systemd-flag-040357'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:16:4a:05'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
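The XML just above is what libvirt reports back once the domain has been defined from the shorter definition earlier (cdrom + raw disk + two virtio NICs). Defining and booting a domain like this goes through the libvirt API; a hedged sketch with the Go bindings follows (the module path is the current official binding as far as I know, the XML file name is made up, and error handling is trimmed):

package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt" // assumed binding; older code imports github.com/libvirt/libvirt-go
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Domain XML of the shape logged above; the file name is illustrative.
	xml, err := os.ReadFile("force-systemd-flag-040357.xml")
	if err != nil {
		panic(err)
	}

	// Define makes the domain persistent; Create actually boots it.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		panic(err)
	}
	fmt.Println("domain defined and started")
}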
	
	I1217 09:30:08.570005  933201 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 09:30:08.570039  933201 machine.go:97] duration metric: took 6.352010416s to provisionDockerMachine
	I1217 09:30:08.570055  933201 start.go:293] postStartSetup for "pause-869559" (driver="kvm2")
	I1217 09:30:08.570069  933201 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 09:30:08.570135  933201 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 09:30:08.574341  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.576044  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:08.576107  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.576543  933201 sshutil.go:56] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/pause-869559/id_rsa Username:docker}
	I1217 09:30:08.660490  933201 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 09:30:08.666474  933201 info.go:137] Remote host: Buildroot 2025.02
	I1217 09:30:08.666521  933201 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-893359/.minikube/addons for local assets ...
	I1217 09:30:08.666614  933201 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-893359/.minikube/files for local assets ...
	I1217 09:30:08.666721  933201 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-893359/.minikube/files/etc/ssl/certs/8972772.pem -> 8972772.pem in /etc/ssl/certs
	I1217 09:30:08.666867  933201 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 09:30:08.680185  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/files/etc/ssl/certs/8972772.pem --> /etc/ssl/certs/8972772.pem (1708 bytes)
	I1217 09:30:08.714215  933201 start.go:296] duration metric: took 144.139489ms for postStartSetup
	I1217 09:30:08.714273  933201 fix.go:56] duration metric: took 6.500603085s for fixHost
	I1217 09:30:08.717448  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.717959  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:08.717991  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.718232  933201 main.go:143] libmachine: Using SSH client type: native
	I1217 09:30:08.718342  933201 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1217 09:30:08.718356  933201 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1217 09:30:08.825023  933201 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765963808.815889037
	
	I1217 09:30:08.825052  933201 fix.go:216] guest clock: 1765963808.815889037
	I1217 09:30:08.825062  933201 fix.go:229] Guest: 2025-12-17 09:30:08.815889037 +0000 UTC Remote: 2025-12-17 09:30:08.714280264 +0000 UTC m=+7.657224610 (delta=101.608773ms)
	I1217 09:30:08.825084  933201 fix.go:200] guest clock delta is within tolerance: 101.608773ms
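fix.go compares the guest's `date +%s.%N` output against the host clock and accepts the drift when it is under a tolerance (101.6ms here). A small sketch of that check, with the tolerance value picked arbitrarily rather than taken from the tool:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as seen in the log.
	guestRaw := "1765963808.815889037"

	parts := strings.SplitN(guestRaw, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	delta := time.Since(guest)
	tolerance := time.Second // illustrative; pick whatever drift the caller can live with

	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock is off by %v, consider syncing time\n", delta)
	}
}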
	I1217 09:30:08.825103  933201 start.go:83] releasing machines lock for "pause-869559", held for 6.611463696s
	I1217 09:30:08.828854  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.829411  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:08.829449  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.830142  933201 ssh_runner.go:195] Run: cat /version.json
	I1217 09:30:08.830218  933201 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 09:30:08.833957  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.834185  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.834471  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:08.834520  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.834691  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:08.834723  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:08.834719  933201 sshutil.go:56] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/pause-869559/id_rsa Username:docker}
	I1217 09:30:08.834985  933201 sshutil.go:56] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/pause-869559/id_rsa Username:docker}
	I1217 09:30:08.923887  933201 ssh_runner.go:195] Run: systemctl --version
	I1217 09:30:08.952615  933201 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 09:30:09.118289  933201 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 09:30:09.129662  933201 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 09:30:09.129750  933201 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 09:30:09.141994  933201 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 09:30:09.142026  933201 start.go:496] detecting cgroup driver to use...
	I1217 09:30:09.142119  933201 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 09:30:09.162845  933201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 09:30:09.181009  933201 docker.go:218] disabling cri-docker service (if available) ...
	I1217 09:30:09.181095  933201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 09:30:09.201333  933201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 09:30:09.218961  933201 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 09:30:09.412029  933201 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 09:30:09.589245  933201 docker.go:234] disabling docker service ...
	I1217 09:30:09.589327  933201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 09:30:09.623748  933201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 09:30:09.643179  933201 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 09:30:09.842329  933201 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 09:30:10.038112  933201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 09:30:10.055605  933201 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 09:30:10.082343  933201 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 09:30:10.082420  933201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 09:30:10.097457  933201 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 09:30:10.097567  933201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 09:30:10.114705  933201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 09:30:10.129082  933201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 09:30:10.142573  933201 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 09:30:10.157154  933201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 09:30:10.169859  933201 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 09:30:10.186318  933201 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 09:30:10.203561  933201 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 09:30:10.214614  933201 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 09:30:10.225668  933201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 09:30:10.399705  933201 ssh_runner.go:195] Run: sudo systemctl restart crio
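The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup manager, default sysctls) before reloading systemd and restarting CRI-O. The same key rewrites can be sketched in Go with a regexp; this is only an illustration of the edit, not how the tool does it, and the local path stands in for the file on the VM:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setKey rewrites every `key = ...` line in conf to the given quoted value,
// mirroring what the sed one-liners above do in place on the VM.
func setKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAll(conf, []byte(fmt.Sprintf("%s = %q", key, value)))
}

func main() {
	path := "02-crio.conf" // illustrative; on the VM it is /etc/crio/crio.conf.d/02-crio.conf
	conf, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}

	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	conf = setKey(conf, "cgroup_manager", "cgroupfs")

	if err := os.WriteFile(path, conf, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("rewrote", path, "- restart crio to pick the change up")
}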
	I1217 09:30:10.630565  933201 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 09:30:10.630665  933201 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 09:30:10.637597  933201 start.go:564] Will wait 60s for crictl version
	I1217 09:30:10.637667  933201 ssh_runner.go:195] Run: which crictl
	I1217 09:30:10.642067  933201 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 09:30:10.677915  933201 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
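After restarting CRI-O the tool budgets 60s each for the socket at /var/run/crio/crio.sock and for a working crictl before the version output above appears. A minimal version of that wait, with the polling interval guessed rather than taken from the source:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until path exists or the timeout elapses.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("crio socket is up")
}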
	I1217 09:30:10.677992  933201 ssh_runner.go:195] Run: crio --version
	I1217 09:30:10.710746  933201 ssh_runner.go:195] Run: crio --version
	I1217 09:30:10.745685  933201 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	I1217 09:30:10.749691  933201 main.go:143] libmachine: domain pause-869559 has defined MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:10.750144  933201 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d9:05:f8", ip: ""} in network mk-pause-869559: {Iface:virbr1 ExpiryTime:2025-12-17 10:29:20 +0000 UTC Type:0 Mac:52:54:00:d9:05:f8 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:pause-869559 Clientid:01:52:54:00:d9:05:f8}
	I1217 09:30:10.750176  933201 main.go:143] libmachine: domain pause-869559 has defined IP address 192.168.39.212 and MAC address 52:54:00:d9:05:f8 in network mk-pause-869559
	I1217 09:30:10.750403  933201 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1217 09:30:10.755422  933201 kubeadm.go:884] updating cluster {Name:pause-869559 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3
ClusterName:pause-869559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 09:30:10.755630  933201 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 09:30:10.755703  933201 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 09:30:10.797537  933201 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 09:30:10.797563  933201 crio.go:433] Images already preloaded, skipping extraction
	I1217 09:30:10.797618  933201 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 09:30:10.835028  933201 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 09:30:10.835060  933201 cache_images.go:86] Images are preloaded, skipping loading
	I1217 09:30:10.835072  933201 kubeadm.go:935] updating node { 192.168.39.212 8443 v1.34.3 crio true true} ...
	I1217 09:30:10.835213  933201 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-869559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.212
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:pause-869559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
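The kubelet drop-in above is rendered from the cluster config (version, node name, node IP). A hedged text/template sketch of that kind of rendering; the real template lives in minikube's source tree, so the template text and struct fields here are only an illustration shaped like the logged output:

package main

import (
	"os"
	"text/template"
)

// kubeletUnit mirrors the shape of the drop-in shown above; it is not the tool's own template.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

type nodeParams struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// Values taken from the log above.
	err := tmpl.Execute(os.Stdout, nodeParams{
		KubernetesVersion: "v1.34.3",
		NodeName:          "pause-869559",
		NodeIP:            "192.168.39.212",
	})
	if err != nil {
		panic(err)
	}
}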
	I1217 09:30:10.835312  933201 ssh_runner.go:195] Run: crio config
	I1217 09:30:10.883111  933201 cni.go:84] Creating CNI manager for ""
	I1217 09:30:10.883150  933201 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 09:30:10.883171  933201 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 09:30:10.883193  933201 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.212 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-869559 NodeName:pause-869559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.212 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 09:30:10.883327  933201 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.212
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-869559"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.212"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.212"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 09:30:10.883396  933201 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 09:30:10.897455  933201 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 09:30:10.897546  933201 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 09:30:10.909140  933201 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1217 09:30:10.931429  933201 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 09:30:10.952610  933201 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1217 09:30:10.974694  933201 ssh_runner.go:195] Run: grep 192.168.39.212	control-plane.minikube.internal$ /etc/hosts
	I1217 09:30:10.979493  933201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 09:30:11.438418  931969 api_server.go:269] stopped: https://192.168.61.151:8443/healthz: Get "https://192.168.61.151:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1217 09:30:11.438492  931969 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 09:30:11.438589  931969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 09:30:11.491489  931969 cri.go:89] found id: "e251798e70d4d460f82329b5387896274d0349650aaf071922a28a25fefb347d"
	I1217 09:30:11.491526  931969 cri.go:89] found id: "3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec"
	I1217 09:30:11.491533  931969 cri.go:89] found id: ""
	I1217 09:30:11.491543  931969 logs.go:282] 2 containers: [e251798e70d4d460f82329b5387896274d0349650aaf071922a28a25fefb347d 3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec]
	I1217 09:30:11.491609  931969 ssh_runner.go:195] Run: which crictl
	I1217 09:30:11.496290  931969 ssh_runner.go:195] Run: which crictl
	I1217 09:30:11.500674  931969 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 09:30:11.500728  931969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 09:30:11.556158  931969 cri.go:89] found id: "1ee8a08e83aee6fff7c1520688b72e31df5ebed846cf544dfe38126f5efb9219"
	I1217 09:30:11.556186  931969 cri.go:89] found id: ""
	I1217 09:30:11.556197  931969 logs.go:282] 1 containers: [1ee8a08e83aee6fff7c1520688b72e31df5ebed846cf544dfe38126f5efb9219]
	I1217 09:30:11.556268  931969 ssh_runner.go:195] Run: which crictl
	I1217 09:30:11.563672  931969 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 09:30:11.563748  931969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 09:30:11.616223  931969 cri.go:89] found id: "0b7d452f773b72aa05a84e4d57403ec44becfd48820d58c8c5a44daf5a9f80b2"
	I1217 09:30:11.616323  931969 cri.go:89] found id: ""
	I1217 09:30:11.616338  931969 logs.go:282] 1 containers: [0b7d452f773b72aa05a84e4d57403ec44becfd48820d58c8c5a44daf5a9f80b2]
	I1217 09:30:11.616411  931969 ssh_runner.go:195] Run: which crictl
	I1217 09:30:11.621936  931969 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 09:30:11.622004  931969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 09:30:11.664412  931969 cri.go:89] found id: "c7c13c202977c416a0db7f3a2551539e274763fddf8a36322d2ebfdfda7458f7"
	I1217 09:30:11.664440  931969 cri.go:89] found id: "ef5e73187ba4577410b93b3f14b58a7d1aca6215840afd6e49e8bd0424156e7a"
	I1217 09:30:11.664447  931969 cri.go:89] found id: ""
	I1217 09:30:11.664456  931969 logs.go:282] 2 containers: [c7c13c202977c416a0db7f3a2551539e274763fddf8a36322d2ebfdfda7458f7 ef5e73187ba4577410b93b3f14b58a7d1aca6215840afd6e49e8bd0424156e7a]
	I1217 09:30:11.664536  931969 ssh_runner.go:195] Run: which crictl
	I1217 09:30:11.669683  931969 ssh_runner.go:195] Run: which crictl
	I1217 09:30:11.675453  931969 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 09:30:11.675565  931969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 09:30:11.729833  931969 cri.go:89] found id: "c1a23713499eee9c1fabad66e845c1b59aa0019ae73656c31a329a694e0d4a2d"
	I1217 09:30:11.729871  931969 cri.go:89] found id: ""
	I1217 09:30:11.729884  931969 logs.go:282] 1 containers: [c1a23713499eee9c1fabad66e845c1b59aa0019ae73656c31a329a694e0d4a2d]
	I1217 09:30:11.729959  931969 ssh_runner.go:195] Run: which crictl
	I1217 09:30:11.735525  931969 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 09:30:11.735606  931969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 09:30:11.778777  931969 cri.go:89] found id: "d9725832b22843dec07a6141fe80d650192afc343f6197a2841406289eae3b16"
	I1217 09:30:11.778811  931969 cri.go:89] found id: "644136c1a2eafa8b99be19afd52abb3b4407e1fa0cd5d035c034faade87c2e41"
	I1217 09:30:11.778818  931969 cri.go:89] found id: ""
	I1217 09:30:11.778828  931969 logs.go:282] 2 containers: [d9725832b22843dec07a6141fe80d650192afc343f6197a2841406289eae3b16 644136c1a2eafa8b99be19afd52abb3b4407e1fa0cd5d035c034faade87c2e41]
	I1217 09:30:11.778898  931969 ssh_runner.go:195] Run: which crictl
	I1217 09:30:11.785184  931969 ssh_runner.go:195] Run: which crictl
	I1217 09:30:11.790560  931969 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 09:30:11.790645  931969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 09:30:11.846696  931969 cri.go:89] found id: ""
	I1217 09:30:11.846735  931969 logs.go:282] 0 containers: []
	W1217 09:30:11.846750  931969 logs.go:284] No container was found matching "kindnet"
	I1217 09:30:11.846759  931969 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 09:30:11.846844  931969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 09:30:11.893281  931969 cri.go:89] found id: "76338f44d08ab670fac5aa5eeeeb89ac90ba80194e50cc8eed11133a23a2d0d1"
	I1217 09:30:11.893313  931969 cri.go:89] found id: ""
	I1217 09:30:11.893325  931969 logs.go:282] 1 containers: [76338f44d08ab670fac5aa5eeeeb89ac90ba80194e50cc8eed11133a23a2d0d1]
	I1217 09:30:11.893395  931969 ssh_runner.go:195] Run: which crictl
	I1217 09:30:11.899666  931969 logs.go:123] Gathering logs for kube-scheduler [ef5e73187ba4577410b93b3f14b58a7d1aca6215840afd6e49e8bd0424156e7a] ...
	I1217 09:30:11.899695  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef5e73187ba4577410b93b3f14b58a7d1aca6215840afd6e49e8bd0424156e7a"
	I1217 09:30:11.952414  931969 logs.go:123] Gathering logs for kube-proxy [c1a23713499eee9c1fabad66e845c1b59aa0019ae73656c31a329a694e0d4a2d] ...
	I1217 09:30:11.952445  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1a23713499eee9c1fabad66e845c1b59aa0019ae73656c31a329a694e0d4a2d"
	I1217 09:30:11.996635  931969 logs.go:123] Gathering logs for container status ...
	I1217 09:30:11.996671  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 09:30:12.054044  931969 logs.go:123] Gathering logs for storage-provisioner [76338f44d08ab670fac5aa5eeeeb89ac90ba80194e50cc8eed11133a23a2d0d1] ...
	I1217 09:30:12.054086  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76338f44d08ab670fac5aa5eeeeb89ac90ba80194e50cc8eed11133a23a2d0d1"
	I1217 09:30:12.095570  931969 logs.go:123] Gathering logs for CRI-O ...
	I1217 09:30:12.095603  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 09:30:12.505218  931969 logs.go:123] Gathering logs for dmesg ...
	I1217 09:30:12.505255  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 09:30:12.526859  931969 logs.go:123] Gathering logs for kube-apiserver [e251798e70d4d460f82329b5387896274d0349650aaf071922a28a25fefb347d] ...
	I1217 09:30:12.526904  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e251798e70d4d460f82329b5387896274d0349650aaf071922a28a25fefb347d"
	I1217 09:30:12.582559  931969 logs.go:123] Gathering logs for kube-apiserver [3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec] ...
	I1217 09:30:12.582601  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec"
	W1217 09:30:12.635210  931969 logs.go:130] failed kube-apiserver [3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec": Process exited with status 1
	stdout:
	
	stderr:
	E1217 09:30:12.614873    2981 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec\": container with ID starting with 3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec not found: ID does not exist" containerID="3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec"
	time="2025-12-17T09:30:12Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec\": container with ID starting with 3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec not found: ID does not exist"
	 output: 
	** stderr ** 
	E1217 09:30:12.614873    2981 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec\": container with ID starting with 3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec not found: ID does not exist" containerID="3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec"
	time="2025-12-17T09:30:12Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec\": container with ID starting with 3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec not found: ID does not exist"
	
	** /stderr **
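When a container ID has already been garbage-collected, `crictl logs` exits non-zero with "container ... not found", and the harness records a warning (logs.go:130) and keeps sweeping the remaining containers instead of failing the whole log gathering. A sketch of that continue-on-error pattern with os/exec; the command shape and IDs come from the log, the rest is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func gatherLogs(ids []string) {
	for _, id := range ids {
		// Same command shape as in the log: tail the last 400 lines per container.
		out, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			// The container may have been removed between listing and reading; warn and keep going.
			fmt.Printf("W failed %s: %v\n%s\n", id, err, out)
			continue
		}
		fmt.Printf("I gathered %d bytes of logs for %s\n", len(out), id)
	}
}

func main() {
	gatherLogs([]string{
		"e251798e70d4d460f82329b5387896274d0349650aaf071922a28a25fefb347d",
		"3380bf4ddfdc5157cea6d903b5fa663ea2da36bae21c9618e161efe66d8399ec", // the one that was already gone
	})
}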
	I1217 09:30:12.635237  931969 logs.go:123] Gathering logs for coredns [0b7d452f773b72aa05a84e4d57403ec44becfd48820d58c8c5a44daf5a9f80b2] ...
	I1217 09:30:12.635256  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b7d452f773b72aa05a84e4d57403ec44becfd48820d58c8c5a44daf5a9f80b2"
	I1217 09:30:12.687799  931969 logs.go:123] Gathering logs for kube-scheduler [c7c13c202977c416a0db7f3a2551539e274763fddf8a36322d2ebfdfda7458f7] ...
	I1217 09:30:12.687843  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7c13c202977c416a0db7f3a2551539e274763fddf8a36322d2ebfdfda7458f7"
	I1217 09:30:12.772439  931969 logs.go:123] Gathering logs for describe nodes ...
	I1217 09:30:12.772483  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 09:30:12.879283  931969 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 09:30:12.879315  931969 logs.go:123] Gathering logs for etcd [1ee8a08e83aee6fff7c1520688b72e31df5ebed846cf544dfe38126f5efb9219] ...
	I1217 09:30:12.879337  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ee8a08e83aee6fff7c1520688b72e31df5ebed846cf544dfe38126f5efb9219"
	I1217 09:30:12.939277  931969 logs.go:123] Gathering logs for kube-controller-manager [d9725832b22843dec07a6141fe80d650192afc343f6197a2841406289eae3b16] ...
	I1217 09:30:12.939335  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9725832b22843dec07a6141fe80d650192afc343f6197a2841406289eae3b16"
	I1217 09:30:12.982770  931969 logs.go:123] Gathering logs for kube-controller-manager [644136c1a2eafa8b99be19afd52abb3b4407e1fa0cd5d035c034faade87c2e41] ...
	I1217 09:30:12.982817  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 644136c1a2eafa8b99be19afd52abb3b4407e1fa0cd5d035c034faade87c2e41"
	I1217 09:30:13.030162  931969 logs.go:123] Gathering logs for kubelet ...
	I1217 09:30:13.030209  931969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 09:30:13.069491  931969 logs.go:138] Found kubelet problem: Dec 17 09:28:29 running-upgrade-879489 kubelet[1253]: I1217 09:28:29.703410    1253 status_manager.go:890] "Failed to get status for pod" podUID="4f726c22-c99a-48ae-91e1-bc389f05d70b" pod="kube-system/storage-provisioner" err="pods \"storage-provisioner\" is forbidden: User \"system:node:running-upgrade-879489\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'running-upgrade-879489' and this object"
	I1217 09:30:13.157446  931969 out.go:374] Setting ErrFile to fd 2...
	I1217 09:30:13.157487  931969 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1217 09:30:13.157578  931969 out.go:285] X Problems detected in kubelet:
	W1217 09:30:13.157596  931969 out.go:285]   Dec 17 09:28:29 running-upgrade-879489 kubelet[1253]: I1217 09:28:29.703410    1253 status_manager.go:890] "Failed to get status for pod" podUID="4f726c22-c99a-48ae-91e1-bc389f05d70b" pod="kube-system/storage-provisioner" err="pods \"storage-provisioner\" is forbidden: User \"system:node:running-upgrade-879489\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'running-upgrade-879489' and this object"
	I1217 09:30:13.157605  931969 out.go:374] Setting ErrFile to fd 2...
	I1217 09:30:13.157617  931969 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 09:30:10.684936  933313 main.go:143] libmachine: waiting for domain to start...
	I1217 09:30:10.686560  933313 main.go:143] libmachine: domain is now running
	I1217 09:30:10.686577  933313 main.go:143] libmachine: waiting for IP...
	I1217 09:30:10.687394  933313 main.go:143] libmachine: domain force-systemd-flag-040357 has defined MAC address 52:54:00:7b:8e:89 in network mk-force-systemd-flag-040357
	I1217 09:30:10.688054  933313 main.go:143] libmachine: no network interface addresses found for domain force-systemd-flag-040357 (source=lease)
	I1217 09:30:10.688072  933313 main.go:143] libmachine: trying to list again with source=arp
	I1217 09:30:10.688409  933313 main.go:143] libmachine: unable to find current IP address of domain force-systemd-flag-040357 in network mk-force-systemd-flag-040357 (interfaces detected: [])
	I1217 09:30:10.688462  933313 retry.go:31] will retry after 276.471752ms: waiting for domain to come up
	I1217 09:30:10.967099  933313 main.go:143] libmachine: domain force-systemd-flag-040357 has defined MAC address 52:54:00:7b:8e:89 in network mk-force-systemd-flag-040357
	I1217 09:30:10.967935  933313 main.go:143] libmachine: no network interface addresses found for domain force-systemd-flag-040357 (source=lease)
	I1217 09:30:10.967951  933313 main.go:143] libmachine: trying to list again with source=arp
	I1217 09:30:10.968434  933313 main.go:143] libmachine: unable to find current IP address of domain force-systemd-flag-040357 in network mk-force-systemd-flag-040357 (interfaces detected: [])
	I1217 09:30:10.968479  933313 retry.go:31] will retry after 312.692232ms: waiting for domain to come up
	I1217 09:30:11.283186  933313 main.go:143] libmachine: domain force-systemd-flag-040357 has defined MAC address 52:54:00:7b:8e:89 in network mk-force-systemd-flag-040357
	I1217 09:30:11.284164  933313 main.go:143] libmachine: no network interface addresses found for domain force-systemd-flag-040357 (source=lease)
	I1217 09:30:11.284191  933313 main.go:143] libmachine: trying to list again with source=arp
	I1217 09:30:11.284721  933313 main.go:143] libmachine: unable to find current IP address of domain force-systemd-flag-040357 in network mk-force-systemd-flag-040357 (interfaces detected: [])
	I1217 09:30:11.284773  933313 retry.go:31] will retry after 340.039444ms: waiting for domain to come up
	I1217 09:30:11.626297  933313 main.go:143] libmachine: domain force-systemd-flag-040357 has defined MAC address 52:54:00:7b:8e:89 in network mk-force-systemd-flag-040357
	I1217 09:30:11.627137  933313 main.go:143] libmachine: no network interface addresses found for domain force-systemd-flag-040357 (source=lease)
	I1217 09:30:11.627155  933313 main.go:143] libmachine: trying to list again with source=arp
	I1217 09:30:11.627639  933313 main.go:143] libmachine: unable to find current IP address of domain force-systemd-flag-040357 in network mk-force-systemd-flag-040357 (interfaces detected: [])
	I1217 09:30:11.627708  933313 retry.go:31] will retry after 591.270059ms: waiting for domain to come up
	I1217 09:30:12.220806  933313 main.go:143] libmachine: domain force-systemd-flag-040357 has defined MAC address 52:54:00:7b:8e:89 in network mk-force-systemd-flag-040357
	I1217 09:30:12.221742  933313 main.go:143] libmachine: no network interface addresses found for domain force-systemd-flag-040357 (source=lease)
	I1217 09:30:12.221768  933313 main.go:143] libmachine: trying to list again with source=arp
	I1217 09:30:12.222220  933313 main.go:143] libmachine: unable to find current IP address of domain force-systemd-flag-040357 in network mk-force-systemd-flag-040357 (interfaces detected: [])
	I1217 09:30:12.222269  933313 retry.go:31] will retry after 571.901863ms: waiting for domain to come up
	I1217 09:30:12.796733  933313 main.go:143] libmachine: domain force-systemd-flag-040357 has defined MAC address 52:54:00:7b:8e:89 in network mk-force-systemd-flag-040357
	I1217 09:30:12.797810  933313 main.go:143] libmachine: no network interface addresses found for domain force-systemd-flag-040357 (source=lease)
	I1217 09:30:12.797842  933313 main.go:143] libmachine: trying to list again with source=arp
	I1217 09:30:12.798282  933313 main.go:143] libmachine: unable to find current IP address of domain force-systemd-flag-040357 in network mk-force-systemd-flag-040357 (interfaces detected: [])
	I1217 09:30:12.798331  933313 retry.go:31] will retry after 809.844209ms: waiting for domain to come up
	I1217 09:30:13.609247  933313 main.go:143] libmachine: domain force-systemd-flag-040357 has defined MAC address 52:54:00:7b:8e:89 in network mk-force-systemd-flag-040357
	I1217 09:30:13.609908  933313 main.go:143] libmachine: no network interface addresses found for domain force-systemd-flag-040357 (source=lease)
	I1217 09:30:13.609928  933313 main.go:143] libmachine: trying to list again with source=arp
	I1217 09:30:13.610265  933313 main.go:143] libmachine: unable to find current IP address of domain force-systemd-flag-040357 in network mk-force-systemd-flag-040357 (interfaces detected: [])
	I1217 09:30:13.610305  933313 retry.go:31] will retry after 1.026301637s: waiting for domain to come up
	I1217 09:30:14.638534  933313 main.go:143] libmachine: domain force-systemd-flag-040357 has defined MAC address 52:54:00:7b:8e:89 in network mk-force-systemd-flag-040357
	I1217 09:30:14.639129  933313 main.go:143] libmachine: no network interface addresses found for domain force-systemd-flag-040357 (source=lease)
	I1217 09:30:14.639145  933313 main.go:143] libmachine: trying to list again with source=arp
	I1217 09:30:14.639464  933313 main.go:143] libmachine: unable to find current IP address of domain force-systemd-flag-040357 in network mk-force-systemd-flag-040357 (interfaces detected: [])
	I1217 09:30:14.639528  933313 retry.go:31] will retry after 1.465870285s: waiting for domain to come up
	I1217 09:30:11.163980  933201 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 09:30:11.188259  933201 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/pause-869559 for IP: 192.168.39.212
	I1217 09:30:11.188285  933201 certs.go:195] generating shared ca certs ...
	I1217 09:30:11.188305  933201 certs.go:227] acquiring lock for ca certs: {Name:mk9975fd3c0c6324a63f90fa6e20c46f3034e6ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 09:30:11.188473  933201 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-893359/.minikube/ca.key
	I1217 09:30:11.188561  933201 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-893359/.minikube/proxy-client-ca.key
	I1217 09:30:11.188585  933201 certs.go:257] generating profile certs ...
	I1217 09:30:11.188707  933201 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/pause-869559/client.key
	I1217 09:30:11.188802  933201 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/pause-869559/apiserver.key.873948f8
	I1217 09:30:11.188878  933201 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/pause-869559/proxy-client.key
	I1217 09:30:11.189026  933201 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/897277.pem (1338 bytes)
	W1217 09:30:11.189071  933201 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-893359/.minikube/certs/897277_empty.pem, impossibly tiny 0 bytes
	I1217 09:30:11.189087  933201 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 09:30:11.189132  933201 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/ca.pem (1078 bytes)
	I1217 09:30:11.189179  933201 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/cert.pem (1123 bytes)
	I1217 09:30:11.189218  933201 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-893359/.minikube/certs/key.pem (1675 bytes)
	I1217 09:30:11.189281  933201 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-893359/.minikube/files/etc/ssl/certs/8972772.pem (1708 bytes)
	I1217 09:30:11.189920  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 09:30:11.223103  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 09:30:11.258163  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 09:30:11.290700  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 09:30:11.336294  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/pause-869559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 09:30:11.373412  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/pause-869559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 09:30:11.407240  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/pause-869559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 09:30:11.440057  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/pause-869559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 09:30:11.481401  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 09:30:11.606882  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/certs/897277.pem --> /usr/share/ca-certificates/897277.pem (1338 bytes)
	I1217 09:30:11.692782  933201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-893359/.minikube/files/etc/ssl/certs/8972772.pem --> /usr/share/ca-certificates/8972772.pem (1708 bytes)
	I1217 09:30:11.787594  933201 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 09:30:11.868730  933201 ssh_runner.go:195] Run: openssl version
	I1217 09:30:11.884874  933201 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 09:30:11.949201  933201 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 09:30:12.020522  933201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 09:30:12.037145  933201 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 08:16 /usr/share/ca-certificates/minikubeCA.pem
	I1217 09:30:12.037251  933201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 09:30:12.053797  933201 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 09:30:12.092653  933201 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/897277.pem
	I1217 09:30:12.126174  933201 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/897277.pem /etc/ssl/certs/897277.pem
	I1217 09:30:12.159722  933201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/897277.pem
	I1217 09:30:12.176614  933201 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 08:35 /usr/share/ca-certificates/897277.pem
	I1217 09:30:12.176720  933201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/897277.pem
	I1217 09:30:12.203040  933201 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 09:30:12.223867  933201 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8972772.pem
	I1217 09:30:12.248084  933201 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8972772.pem /etc/ssl/certs/8972772.pem
	I1217 09:30:12.274635  933201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8972772.pem
	I1217 09:30:12.292395  933201 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 08:35 /usr/share/ca-certificates/8972772.pem
	I1217 09:30:12.292463  933201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8972772.pem
	I1217 09:30:12.314704  933201 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 09:30:12.347685  933201 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 09:30:12.356735  933201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 09:30:12.373390  933201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 09:30:12.402432  933201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 09:30:12.421754  933201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 09:30:12.441110  933201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 09:30:12.471578  933201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 09:30:12.496274  933201 kubeadm.go:401] StartCluster: {Name:pause-869559 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 Cl
usterName:pause-869559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 09:30:12.496454  933201 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 09:30:12.496546  933201 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 09:30:12.625666  933201 cri.go:89] found id: "e0337c6439f8b8fce3bb0b66c3433d27794c3f9fa268ba8b7117cdda47e7ac5b"
	I1217 09:30:12.625696  933201 cri.go:89] found id: "4f631c9821b5e83eed22adff990fd2581e6c144822df0e654d14bd419b364ac6"
	I1217 09:30:12.625703  933201 cri.go:89] found id: "e331a882f16a8c302e6391a48abd39817a2b42d9c81de9f1b744ae81e2a67ad7"
	I1217 09:30:12.625708  933201 cri.go:89] found id: "2abdf54511473a3eef2b8ef0906e02182f4eb5e5f0bb0c765961af5b82cfce71"
	I1217 09:30:12.625713  933201 cri.go:89] found id: "86f9b55f6bae4f650c85a2f7d60899240c9afcb2f5ab7b3a2a8a69519d939917"
	I1217 09:30:12.625718  933201 cri.go:89] found id: "47326fe04fc5bff2fe0eb071e7d7d76e2e37d6be23dcd9075195432501497e5e"
	I1217 09:30:12.625723  933201 cri.go:89] found id: "e06d3f3a0193a41ffac4306c738bb419c8aa15f440fec5d686635327ea6a97ed"
	I1217 09:30:12.625729  933201 cri.go:89] found id: ""
	I1217 09:30:12.625785  933201 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
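The libmachine lines in the log above (pid 933313) show a poll-with-backoff loop: look up the domain's IP via the DHCP lease, fall back to ARP, and if nothing is found, "will retry after …ms" with a growing, jittered delay. A minimal sketch of that pattern, assuming a hypothetical `lookupIP` stand-in rather than minikube's actual retry package:

```go
// Minimal sketch (not minikube's retry package) of the poll-with-backoff
// pattern visible in the libmachine log above: check for the domain's IP,
// and if none is found yet, sleep an increasing, jittered interval and
// retry until an overall deadline expires.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for the lease/ARP lookup done by
// libmachine; here it simply fails a few times before "finding" an address.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("no network interface addresses found")
	}
	return "192.168.39.212", nil
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out waiting for IP: %w", err)
		}
		// Jitter the delay (as the varying retry intervals in the log suggest)
		// and grow it so repeated failures back off progressively.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
}

func main() {
	ip, err := waitForIP(30 * time.Second)
	fmt.Println(ip, err)
}
```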
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-869559 -n pause-869559
helpers_test.go:270: (dbg) Run:  kubectl --context pause-869559 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (45.42s)
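The certs.go steps in the log above (pid 933201) copy CA certificates into /usr/share/ca-certificates, run `openssl x509 -hash -noout` on each, and symlink the cert as /etc/ssl/certs/&lt;hash&gt;.0 (e.g. b5213941.0). A minimal local sketch of that OpenSSL subject-hash symlink convention, with assumed paths and plain `os/exec` rather than minikube's ssh_runner (which runs the same commands over SSH inside the guest):

```go
// Sketch of the subject-hash symlink convention applied in the log above:
// compute the certificate's subject hash and link the cert as
// /etc/ssl/certs/<hash>.0 so TLS clients can find it by hash.
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCACert(certPath, certsDir string) error {
	// openssl x509 -hash -noout -in <cert> prints the subject hash, e.g. b5213941.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))

	// ln -fs <cert> <certsDir>/<hash>.0, matching the symlinks checked in the log.
	link := filepath.Join(certsDir, hash+".0")
	if err := exec.Command("ln", "-fs", certPath, link).Run(); err != nil {
		return fmt.Errorf("linking %s: %w", link, err)
	}
	fmt.Printf("installed %s as %s\n", certPath, link)
	return nil
}

func main() {
	// Hypothetical paths for illustration only; root is required for /etc/ssl/certs.
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
```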

                                                
                                    

Test pass (364/431)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 6.42
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.3/json-events 3.36
13 TestDownloadOnly/v1.34.3/preload-exists 0
17 TestDownloadOnly/v1.34.3/LogsDuration 0.08
18 TestDownloadOnly/v1.34.3/DeleteAll 0.15
19 TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-rc.1/json-events 2.88
22 TestDownloadOnly/v1.35.0-rc.1/preload-exists 0
26 TestDownloadOnly/v1.35.0-rc.1/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-rc.1/DeleteAll 0.16
28 TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.65
31 TestOffline 74.45
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 124.18
40 TestAddons/serial/GCPAuth/Namespaces 0.14
41 TestAddons/serial/GCPAuth/FakeCredentials 9.54
44 TestAddons/parallel/Registry 14.89
45 TestAddons/parallel/RegistryCreds 0.7
47 TestAddons/parallel/InspektorGadget 11.68
48 TestAddons/parallel/MetricsServer 5.76
50 TestAddons/parallel/CSI 58.39
51 TestAddons/parallel/Headlamp 20.99
52 TestAddons/parallel/CloudSpanner 6.55
53 TestAddons/parallel/LocalPath 56.09
54 TestAddons/parallel/NvidiaDevicePlugin 6.82
55 TestAddons/parallel/Yakd 11.9
57 TestAddons/StoppedEnableDisable 83.51
58 TestCertOptions 41.07
59 TestCertExpiration 291.1
61 TestForceSystemdFlag 46.79
62 TestForceSystemdEnv 77.82
67 TestErrorSpam/setup 37.73
68 TestErrorSpam/start 0.35
69 TestErrorSpam/status 0.65
70 TestErrorSpam/pause 1.46
71 TestErrorSpam/unpause 1.69
72 TestErrorSpam/stop 5.31
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 49.95
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 36.43
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.08
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.18
84 TestFunctional/serial/CacheCmd/cache/add_local 1.47
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.19
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.56
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.13
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 36.42
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.29
95 TestFunctional/serial/LogsFileCmd 1.28
96 TestFunctional/serial/InvalidService 3.98
98 TestFunctional/parallel/ConfigCmd 0.41
100 TestFunctional/parallel/DryRun 0.21
101 TestFunctional/parallel/InternationalLanguage 0.11
102 TestFunctional/parallel/StatusCmd 0.67
107 TestFunctional/parallel/AddonsCmd 0.18
108 TestFunctional/parallel/PersistentVolumeClaim 79.52
110 TestFunctional/parallel/SSHCmd 0.36
111 TestFunctional/parallel/CpCmd 1.15
112 TestFunctional/parallel/MySQL 131.35
113 TestFunctional/parallel/FileSync 0.16
114 TestFunctional/parallel/CertSync 0.98
118 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.34
122 TestFunctional/parallel/License 0.42
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
134 TestFunctional/parallel/ProfileCmd/profile_list 0.31
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
136 TestFunctional/parallel/MountCmd/any-port 64.75
137 TestFunctional/parallel/MountCmd/specific-port 1.37
138 TestFunctional/parallel/MountCmd/VerifyCleanup 0.99
139 TestFunctional/parallel/Version/short 0.07
140 TestFunctional/parallel/Version/components 0.45
141 TestFunctional/parallel/ImageCommands/ImageListShort 0.19
142 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
143 TestFunctional/parallel/ImageCommands/ImageListJson 0.19
144 TestFunctional/parallel/ImageCommands/ImageListYaml 0.19
145 TestFunctional/parallel/ImageCommands/ImageBuild 2.17
146 TestFunctional/parallel/ImageCommands/Setup 0.95
147 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.28
148 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.82
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.22
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.5
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.76
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.54
154 TestFunctional/parallel/UpdateContextCmd/no_changes 0.07
155 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.07
156 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
157 TestFunctional/parallel/ServiceCmd/List 1.22
158 TestFunctional/parallel/ServiceCmd/JSONOutput 1.21
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile 0
169 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy 51.32
170 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart 123.87
172 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext 0.04
173 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods 0.09
176 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote 3.36
177 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local 1.4
178 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete 0.06
179 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list 0.06
180 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node 0.19
181 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload 1.54
182 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete 0.13
183 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd 0.12
184 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly 0.12
185 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig 30.77
186 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth 0.06
187 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd 1.38
188 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd 1.42
189 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService 4.32
191 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd 0.47
193 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun 0.23
194 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage 0.12
195 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd 0.69
200 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd 0.18
201 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim 82.25
203 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd 0.41
204 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd 1.19
205 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL 92.72
206 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync 0.18
207 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync 1.02
211 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels 0.08
213 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled 0.33
215 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License 0.41
226 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create 0.31
227 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list 0.31
228 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output 0.3
229 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port 63.95
230 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port 1.25
231 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup 1.19
232 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short 0.07
233 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components 0.42
234 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort 0.19
235 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable 0.19
236 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson 0.2
237 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml 0.2
238 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild 2.09
239 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup 0.43
240 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon 1.27
241 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon 0.84
242 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon 1.23
243 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile 0.57
244 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove 0.47
245 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile 0.75
246 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon 0.57
247 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes 0.08
248 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster 0.07
249 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters 0.08
250 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List 1.2
251 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput 1.23
255 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images 0.04
256 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image 0.02
257 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images 0.02
261 TestMultiControlPlane/serial/StartCluster 187.5
262 TestMultiControlPlane/serial/DeployApp 5.41
263 TestMultiControlPlane/serial/PingHostFromPods 1.35
264 TestMultiControlPlane/serial/AddWorkerNode 42.93
265 TestMultiControlPlane/serial/NodeLabels 0.07
266 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.68
267 TestMultiControlPlane/serial/CopyFile 10.75
268 TestMultiControlPlane/serial/StopSecondaryNode 3.49
269 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.51
270 TestMultiControlPlane/serial/RestartSecondaryNode 24.99
271 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.7
272 TestMultiControlPlane/serial/RestartClusterKeepsNodes 284.7
273 TestMultiControlPlane/serial/DeleteSecondaryNode 18.13
274 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.53
275 TestMultiControlPlane/serial/StopCluster 243.85
276 TestMultiControlPlane/serial/RestartCluster 75.62
277 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
278 TestMultiControlPlane/serial/AddSecondaryNode 75.61
279 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.68
284 TestJSONOutput/start/Command 58.47
285 TestJSONOutput/start/Audit 0
287 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
288 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
290 TestJSONOutput/pause/Command 0.71
291 TestJSONOutput/pause/Audit 0
293 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/unpause/Command 0.63
297 TestJSONOutput/unpause/Audit 0
299 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/stop/Command 7
303 TestJSONOutput/stop/Audit 0
305 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
307 TestErrorJSONOutput 0.24
312 TestMainNoArgs 0.06
313 TestMinikubeProfile 77.34
316 TestMountStart/serial/StartWithMountFirst 19.4
317 TestMountStart/serial/VerifyMountFirst 0.31
318 TestMountStart/serial/StartWithMountSecond 21.85
319 TestMountStart/serial/VerifyMountSecond 0.31
320 TestMountStart/serial/DeleteFirst 0.69
321 TestMountStart/serial/VerifyMountPostDelete 0.31
322 TestMountStart/serial/Stop 1.34
323 TestMountStart/serial/RestartStopped 17.99
324 TestMountStart/serial/VerifyMountPostStop 0.32
327 TestMultiNode/serial/FreshStart2Nodes 98.19
328 TestMultiNode/serial/DeployApp2Nodes 4.22
329 TestMultiNode/serial/PingHostFrom2Pods 0.89
330 TestMultiNode/serial/AddNode 40.23
331 TestMultiNode/serial/MultiNodeLabels 0.06
332 TestMultiNode/serial/ProfileList 0.47
333 TestMultiNode/serial/CopyFile 6.17
334 TestMultiNode/serial/StopNode 2.23
335 TestMultiNode/serial/StartAfterStop 36.06
336 TestMultiNode/serial/RestartKeepsNodes 284.58
337 TestMultiNode/serial/DeleteNode 2.57
338 TestMultiNode/serial/StopMultiNode 172.8
339 TestMultiNode/serial/RestartMultiNode 86.31
340 TestMultiNode/serial/ValidateNameConflict 37.33
347 TestScheduledStopUnix 107.31
351 TestRunningBinaryUpgrade 369.39
353 TestKubernetesUpgrade 177.56
356 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
357 TestNoKubernetes/serial/StartWithK8s 96.42
361 TestStoppedBinaryUpgrade/Setup 0.77
362 TestStoppedBinaryUpgrade/Upgrade 103.82
367 TestNetworkPlugins/group/false 5.95
371 TestNoKubernetes/serial/StartWithStopK8s 52.26
372 TestNoKubernetes/serial/Start 40.91
381 TestPause/serial/Start 72.26
382 TestStoppedBinaryUpgrade/MinikubeLogs 1.05
383 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
384 TestNoKubernetes/serial/VerifyK8sNotRunning 0.16
385 TestNoKubernetes/serial/ProfileList 0.64
386 TestNoKubernetes/serial/Stop 1.26
387 TestNoKubernetes/serial/StartNoArgs 61.3
389 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.17
390 TestISOImage/Setup 33.29
392 TestISOImage/Binaries/crictl 0.2
393 TestISOImage/Binaries/curl 0.2
394 TestISOImage/Binaries/docker 0.2
395 TestISOImage/Binaries/git 0.17
396 TestISOImage/Binaries/iptables 0.2
397 TestISOImage/Binaries/podman 0.19
398 TestISOImage/Binaries/rsync 0.19
399 TestISOImage/Binaries/socat 0.19
400 TestISOImage/Binaries/wget 0.23
401 TestISOImage/Binaries/VBoxControl 0.27
402 TestISOImage/Binaries/VBoxService 0.21
403 TestNetworkPlugins/group/auto/Start 59.5
404 TestNetworkPlugins/group/kindnet/Start 82.96
405 TestNetworkPlugins/group/auto/KubeletFlags 0.2
406 TestNetworkPlugins/group/auto/NetCatPod 11.24
407 TestNetworkPlugins/group/auto/DNS 0.18
408 TestNetworkPlugins/group/auto/Localhost 0.16
409 TestNetworkPlugins/group/auto/HairPin 0.15
410 TestNetworkPlugins/group/calico/Start 85.51
411 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
412 TestNetworkPlugins/group/kindnet/KubeletFlags 0.19
413 TestNetworkPlugins/group/kindnet/NetCatPod 11.22
414 TestNetworkPlugins/group/kindnet/DNS 0.15
415 TestNetworkPlugins/group/kindnet/Localhost 0.12
416 TestNetworkPlugins/group/kindnet/HairPin 0.13
417 TestNetworkPlugins/group/custom-flannel/Start 73.09
418 TestNetworkPlugins/group/enable-default-cni/Start 80.12
419 TestNetworkPlugins/group/flannel/Start 89.39
420 TestNetworkPlugins/group/calico/ControllerPod 6.01
421 TestNetworkPlugins/group/calico/KubeletFlags 0.22
422 TestNetworkPlugins/group/calico/NetCatPod 10.3
423 TestNetworkPlugins/group/calico/DNS 0.21
424 TestNetworkPlugins/group/calico/Localhost 0.15
425 TestNetworkPlugins/group/calico/HairPin 0.17
426 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
427 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.29
428 TestNetworkPlugins/group/bridge/Start 60.19
429 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.18
430 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.27
431 TestNetworkPlugins/group/custom-flannel/DNS 0.2
432 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
433 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
434 TestNetworkPlugins/group/enable-default-cni/DNS 0.35
435 TestNetworkPlugins/group/enable-default-cni/Localhost 0.29
436 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
438 TestStartStop/group/old-k8s-version/serial/FirstStart 62.41
440 TestStartStop/group/no-preload/serial/FirstStart 90.01
441 TestNetworkPlugins/group/flannel/ControllerPod 6.01
442 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
443 TestNetworkPlugins/group/flannel/NetCatPod 11.29
444 TestNetworkPlugins/group/flannel/DNS 0.19
445 TestNetworkPlugins/group/flannel/Localhost 0.18
446 TestNetworkPlugins/group/flannel/HairPin 0.17
447 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
448 TestNetworkPlugins/group/bridge/NetCatPod 11.29
450 TestStartStop/group/embed-certs/serial/FirstStart 62.94
451 TestNetworkPlugins/group/bridge/DNS 0.21
452 TestNetworkPlugins/group/bridge/Localhost 0.13
453 TestNetworkPlugins/group/bridge/HairPin 0.16
454 TestStartStop/group/old-k8s-version/serial/DeployApp 8.43
456 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 54.77
457 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.36
458 TestStartStop/group/old-k8s-version/serial/Stop 84.47
459 TestStartStop/group/no-preload/serial/DeployApp 8.35
460 TestStartStop/group/embed-certs/serial/DeployApp 7.31
461 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
462 TestStartStop/group/no-preload/serial/Stop 90.36
463 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.97
464 TestStartStop/group/embed-certs/serial/Stop 85.99
465 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.27
466 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.88
467 TestStartStop/group/default-k8s-diff-port/serial/Stop 83.93
468 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.14
469 TestStartStop/group/old-k8s-version/serial/SecondStart 53.34
470 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.15
471 TestStartStop/group/no-preload/serial/SecondStart 86.28
472 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
473 TestStartStop/group/embed-certs/serial/SecondStart 60.02
474 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
475 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
476 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
477 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 70.75
478 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
479 TestStartStop/group/old-k8s-version/serial/Pause 2.94
481 TestStartStop/group/newest-cni/serial/FirstStart 69.89
482 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.01
483 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
484 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
485 TestStartStop/group/embed-certs/serial/Pause 3.42
487 TestISOImage/PersistentMounts//data 0.22
488 TestISOImage/PersistentMounts//var/lib/docker 0.22
489 TestISOImage/PersistentMounts//var/lib/cni 0.23
490 TestISOImage/PersistentMounts//var/lib/kubelet 0.23
491 TestISOImage/PersistentMounts//var/lib/minikube 0.22
492 TestISOImage/PersistentMounts//var/lib/toolbox 0.25
493 TestISOImage/PersistentMounts//var/lib/boot2docker 0.23
494 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.01
495 TestISOImage/VersionJSON 0.21
496 TestISOImage/eBPFSupport 0.24
497 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
498 TestStartStop/group/newest-cni/serial/DeployApp 0
499 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.17
500 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
501 TestStartStop/group/newest-cni/serial/Stop 7.2
502 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
503 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
504 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.95
505 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
506 TestStartStop/group/no-preload/serial/Pause 3.04
507 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
508 TestStartStop/group/newest-cni/serial/SecondStart 30.68
509 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
510 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
511 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
512 TestStartStop/group/newest-cni/serial/Pause 2.22
TestDownloadOnly/v1.28.0/json-events (6.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-516831 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-516831 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.424551687s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.42s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1217 08:15:31.728541  897277 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1217 08:15:31.728627  897277 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-893359/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-516831
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-516831: exit status 85 (76.148357ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-516831 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-516831 │ jenkins │ v1.37.0 │ 17 Dec 25 08:15 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 08:15:25
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 08:15:25.357000  897290 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:15:25.357290  897290 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:15:25.357301  897290 out.go:374] Setting ErrFile to fd 2...
	I1217 08:15:25.357306  897290 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:15:25.357486  897290 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
	W1217 08:15:25.357618  897290 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22182-893359/.minikube/config/config.json: open /home/jenkins/minikube-integration/22182-893359/.minikube/config/config.json: no such file or directory
	I1217 08:15:25.358078  897290 out.go:368] Setting JSON to true
	I1217 08:15:25.359046  897290 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10671,"bootTime":1765948654,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:15:25.359101  897290 start.go:143] virtualization: kvm guest
	I1217 08:15:25.362082  897290 out.go:99] [download-only-516831] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1217 08:15:25.362242  897290 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22182-893359/.minikube/cache/preloaded-tarball: no such file or directory
	I1217 08:15:25.362277  897290 notify.go:221] Checking for updates...
	I1217 08:15:25.363406  897290 out.go:171] MINIKUBE_LOCATION=22182
	I1217 08:15:25.364656  897290 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:15:25.365847  897290 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22182-893359/kubeconfig
	I1217 08:15:25.366858  897290 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-893359/.minikube
	I1217 08:15:25.367945  897290 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1217 08:15:25.369894  897290 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 08:15:25.370110  897290 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:15:25.399083  897290 out.go:99] Using the kvm2 driver based on user configuration
	I1217 08:15:25.399115  897290 start.go:309] selected driver: kvm2
	I1217 08:15:25.399122  897290 start.go:927] validating driver "kvm2" against <nil>
	I1217 08:15:25.399427  897290 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 08:15:25.399925  897290 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1217 08:15:25.400073  897290 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 08:15:25.400101  897290 cni.go:84] Creating CNI manager for ""
	I1217 08:15:25.400149  897290 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 08:15:25.400158  897290 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 08:15:25.400203  897290 start.go:353] cluster config:
	{Name:download-only-516831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-516831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:15:25.400370  897290 iso.go:125] acquiring lock: {Name:mk258687bf3be9c6817f84af5b9e08a4f47b5420 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 08:15:25.401767  897290 out.go:99] Downloading VM boot image ...
	I1217 08:15:25.401795  897290 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22182-893359/.minikube/cache/iso/amd64/minikube-v1.37.0-1765846775-22141-amd64.iso
	I1217 08:15:28.191349  897290 out.go:99] Starting "download-only-516831" primary control-plane node in "download-only-516831" cluster
	I1217 08:15:28.191397  897290 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 08:15:28.231400  897290 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1217 08:15:28.231445  897290 cache.go:65] Caching tarball of preloaded images
	I1217 08:15:28.231655  897290 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 08:15:28.233421  897290 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1217 08:15:28.233441  897290 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1217 08:15:28.261987  897290 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1217 08:15:28.262144  897290 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22182-893359/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-516831 host does not exist
	  To start a cluster, run: "minikube start -p download-only-516831"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
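The preload log above fetches an MD5 checksum from the GCS API and then downloads the tarball with a `?checksum=md5:…` URL so the result can be verified. A minimal sketch of the verification step (not minikube's download package), hashing the body while it is written to disk and comparing against the expected digest; the URL and destination below are illustrative placeholders:

```go
// Sketch of a checksum-verified download: stream the HTTP body to disk,
// compute its MD5 in the same pass, and reject the file on a mismatch.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	// Hash the body while writing it to disk.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// Placeholder URL/path; the digest is the one reported by the GCS API in the log.
	err := downloadWithMD5("https://example.com/preload.tar.lz4", "/tmp/preload.tar.lz4",
		"72bc7f8573f574c02d8c9a9b3496176b")
	fmt.Println(err)
}
```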

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-516831
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.3/json-events (3.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-435019 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-435019 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.359649779s)
--- PASS: TestDownloadOnly/v1.34.3/json-events (3.36s)

                                                
                                    
TestDownloadOnly/v1.34.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/preload-exists
I1217 08:15:35.478455  897277 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
I1217 08:15:35.478500  897277 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-893359/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-435019
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-435019: exit status 85 (74.857794ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-516831 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-516831 │ jenkins │ v1.37.0 │ 17 Dec 25 08:15 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 08:15 UTC │ 17 Dec 25 08:15 UTC │
	│ delete  │ -p download-only-516831                                                                                                                                                 │ download-only-516831 │ jenkins │ v1.37.0 │ 17 Dec 25 08:15 UTC │ 17 Dec 25 08:15 UTC │
	│ start   │ -o=json --download-only -p download-only-435019 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-435019 │ jenkins │ v1.37.0 │ 17 Dec 25 08:15 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 08:15:32
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 08:15:32.173206  897469 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:15:32.173544  897469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:15:32.173555  897469 out.go:374] Setting ErrFile to fd 2...
	I1217 08:15:32.173560  897469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:15:32.173783  897469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
	I1217 08:15:32.174242  897469 out.go:368] Setting JSON to true
	I1217 08:15:32.175253  897469 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10678,"bootTime":1765948654,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:15:32.175318  897469 start.go:143] virtualization: kvm guest
	I1217 08:15:32.177110  897469 out.go:99] [download-only-435019] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 08:15:32.177352  897469 notify.go:221] Checking for updates...
	I1217 08:15:32.178627  897469 out.go:171] MINIKUBE_LOCATION=22182
	I1217 08:15:32.180393  897469 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:15:32.181989  897469 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22182-893359/kubeconfig
	I1217 08:15:32.183327  897469 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-893359/.minikube
	I1217 08:15:32.184679  897469 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-435019 host does not exist
	  To start a cluster, run: "minikube start -p download-only-435019"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.3/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.34.3/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.3/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-435019
--- PASS: TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/json-events (2.88s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-077551 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-077551 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (2.879980642s)
--- PASS: TestDownloadOnly/v1.35.0-rc.1/json-events (2.88s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/preload-exists
I1217 08:15:38.732457  897277 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
I1217 08:15:38.732503  897277 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-893359/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-rc.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-077551
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-077551: exit status 85 (77.223194ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                     │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-516831 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio      │ download-only-516831 │ jenkins │ v1.37.0 │ 17 Dec 25 08:15 UTC │                     │
	│ delete  │ --all                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 08:15 UTC │ 17 Dec 25 08:15 UTC │
	│ delete  │ -p download-only-516831                                                                                                                                                      │ download-only-516831 │ jenkins │ v1.37.0 │ 17 Dec 25 08:15 UTC │ 17 Dec 25 08:15 UTC │
	│ start   │ -o=json --download-only -p download-only-435019 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio      │ download-only-435019 │ jenkins │ v1.37.0 │ 17 Dec 25 08:15 UTC │                     │
	│ delete  │ --all                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 08:15 UTC │ 17 Dec 25 08:15 UTC │
	│ delete  │ -p download-only-435019                                                                                                                                                      │ download-only-435019 │ jenkins │ v1.37.0 │ 17 Dec 25 08:15 UTC │ 17 Dec 25 08:15 UTC │
	│ start   │ -o=json --download-only -p download-only-077551 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-077551 │ jenkins │ v1.37.0 │ 17 Dec 25 08:15 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 08:15:35
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 08:15:35.906877  897647 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:15:35.907268  897647 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:15:35.907285  897647 out.go:374] Setting ErrFile to fd 2...
	I1217 08:15:35.907292  897647 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:15:35.907805  897647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
	I1217 08:15:35.908848  897647 out.go:368] Setting JSON to true
	I1217 08:15:35.909693  897647 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10682,"bootTime":1765948654,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:15:35.909787  897647 start.go:143] virtualization: kvm guest
	I1217 08:15:35.911443  897647 out.go:99] [download-only-077551] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 08:15:35.911605  897647 notify.go:221] Checking for updates...
	I1217 08:15:35.912857  897647 out.go:171] MINIKUBE_LOCATION=22182
	I1217 08:15:35.914177  897647 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:15:35.915592  897647 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22182-893359/kubeconfig
	I1217 08:15:35.916830  897647 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-893359/.minikube
	I1217 08:15:35.917969  897647 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-077551 host does not exist
	  To start a cluster, run: "minikube start -p download-only-077551"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.16s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.15s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-077551
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.65s)
=== RUN   TestBinaryMirror
I1217 08:15:39.556874  897277 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-293670 --alsologtostderr --binary-mirror http://127.0.0.1:34967 --driver=kvm2  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-293670" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-293670
--- PASS: TestBinaryMirror (0.65s)

                                                
                                    
TestOffline (74.45s)
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-012788 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-012788 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m13.548502274s)
helpers_test.go:176: Cleaning up "offline-crio-012788" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-012788
--- PASS: TestOffline (74.45s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-102582
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-102582: exit status 85 (64.9971ms)

                                                
                                                
-- stdout --
	* Profile "addons-102582" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-102582"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-102582
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-102582: exit status 85 (66.890887ms)

                                                
                                                
-- stdout --
	* Profile "addons-102582" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-102582"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (124.18s)
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-102582 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-102582 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m4.175021428s)
--- PASS: TestAddons/Setup (124.18s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-102582 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-102582 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.54s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-102582 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-102582 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [5ba9b1e5-27b4-431a-8877-f939828de8e0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [5ba9b1e5-27b4-431a-8877-f939828de8e0] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.005117415s
addons_test.go:696: (dbg) Run:  kubectl --context addons-102582 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-102582 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-102582 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.54s)

                                                
                                    
TestAddons/parallel/Registry (14.89s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 8.337109ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-zcqnn" [346d82ea-f7c0-41b3-b452-62f34e93ba28] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004102581s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-5h8sx" [92a36d32-ba2f-41a8-9981-9a149c8411c5] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00678592s
addons_test.go:394: (dbg) Run:  kubectl --context addons-102582 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-102582 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-102582 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.082880699s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-102582 ip
2025/12/17 08:18:17 [DEBUG] GET http://192.168.39.110:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-102582 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.89s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.7s)
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 4.43447ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-102582
addons_test.go:334: (dbg) Run:  kubectl --context addons-102582 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-102582 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.70s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.68s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-vtqlc" [fe0d66e9-f16c-450b-8ef9-9e3a7ab15fd9] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005159894s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-102582 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-102582 addons disable inspektor-gadget --alsologtostderr -v=1: (5.671798122s)
--- PASS: TestAddons/parallel/InspektorGadget (11.68s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.76s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 10.298929ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-pldbl" [ce43b5f2-7eab-4146-ba0f-023fd611c7ef] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004899386s
addons_test.go:465: (dbg) Run:  kubectl --context addons-102582 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-102582 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.76s)

                                                
                                    
TestAddons/parallel/CSI (58.39s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1217 08:18:21.771801  897277 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1217 08:18:21.777102  897277 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1217 08:18:21.777126  897277 kapi.go:107] duration metric: took 5.364556ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 5.376911ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-102582 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-102582 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [76c48d2c-6cb9-4a2b-8cfa-434147149019] Pending
helpers_test.go:353: "task-pv-pod" [76c48d2c-6cb9-4a2b-8cfa-434147149019] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [76c48d2c-6cb9-4a2b-8cfa-434147149019] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003878492s
addons_test.go:574: (dbg) Run:  kubectl --context addons-102582 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-102582 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:436: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:428: (dbg) Run:  kubectl --context addons-102582 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-102582 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-102582 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-102582 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-102582 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [d76f928f-ac4d-4f7c-b6a0-f44fef072254] Pending
helpers_test.go:353: "task-pv-pod-restore" [d76f928f-ac4d-4f7c-b6a0-f44fef072254] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [d76f928f-ac4d-4f7c-b6a0-f44fef072254] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003488068s
addons_test.go:616: (dbg) Run:  kubectl --context addons-102582 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-102582 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-102582 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-102582 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-102582 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-102582 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.988414086s)
--- PASS: TestAddons/parallel/CSI (58.39s)

                                                
                                    
TestAddons/parallel/Headlamp (20.99s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-102582 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-102582 --alsologtostderr -v=1: (1.104037795s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-8s79b" [94fef757-75cc-4f89-8101-c6ed092bcb8e] Pending
helpers_test.go:353: "headlamp-dfcdc64b-8s79b" [94fef757-75cc-4f89-8101-c6ed092bcb8e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-8s79b" [94fef757-75cc-4f89-8101-c6ed092bcb8e] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.009722829s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-102582 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-102582 addons disable headlamp --alsologtostderr -v=1: (5.875657942s)
--- PASS: TestAddons/parallel/Headlamp (20.99s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.55s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-c2nzm" [e170caf1-1a06-4665-ab77-2bc97f5cf4ba] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004010349s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-102582 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.55s)

                                                
                                    
TestAddons/parallel/LocalPath (56.09s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-102582 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-102582 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-102582 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [e459016c-1f84-467a-b7cc-0b050be8dfad] Pending
helpers_test.go:353: "test-local-path" [e459016c-1f84-467a-b7cc-0b050be8dfad] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [e459016c-1f84-467a-b7cc-0b050be8dfad] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [e459016c-1f84-467a-b7cc-0b050be8dfad] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.007237023s
addons_test.go:969: (dbg) Run:  kubectl --context addons-102582 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-102582 ssh "cat /opt/local-path-provisioner/pvc-ded9037b-0c48-4dd9-8dfc-ab5a0107bbd1_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-102582 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-102582 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-102582 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-102582 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.1242635s)
--- PASS: TestAddons/parallel/LocalPath (56.09s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.82s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-n49qb" [1415b907-fbf3-403f-b62c-ae1fe98ef8d1] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005566005s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-102582 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.82s)

                                                
                                    
TestAddons/parallel/Yakd (11.9s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-6654c87f9b-slhgl" [4dcbf04e-4e4c-409a-bb40-19ad3ba6cb6f] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.006145046s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-102582 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-102582 addons disable yakd --alsologtostderr -v=1: (5.896284099s)
--- PASS: TestAddons/parallel/Yakd (11.90s)

                                                
                                    
TestAddons/StoppedEnableDisable (83.51s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-102582
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-102582: (1m23.300928973s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-102582
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-102582
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-102582
--- PASS: TestAddons/StoppedEnableDisable (83.51s)

                                                
                                    
TestCertOptions (41.07s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-757731 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-757731 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (39.772105312s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-757731 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-757731 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-757731 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-757731" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-757731
--- PASS: TestCertOptions (41.07s)

                                                
                                    
TestCertExpiration (291.1s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-779044 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-779044 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m9.122510333s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-779044 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-779044 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (41.077567876s)
helpers_test.go:176: Cleaning up "cert-expiration-779044" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-779044
--- PASS: TestCertExpiration (291.10s)

                                                
                                    
TestForceSystemdFlag (46.79s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-040357 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-040357 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (45.735553233s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-040357 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-040357" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-040357
--- PASS: TestForceSystemdFlag (46.79s)

                                                
                                    
TestForceSystemdEnv (77.82s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-173918 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-173918 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m16.302396644s)
helpers_test.go:176: Cleaning up "force-systemd-env-173918" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-173918
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-173918: (1.517054749s)
--- PASS: TestForceSystemdEnv (77.82s)

                                                
                                    
TestErrorSpam/setup (37.73s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-291515 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-291515 --driver=kvm2  --container-runtime=crio
E1217 08:22:45.367178  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:22:45.373619  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:22:45.385029  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:22:45.406734  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:22:45.448234  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:22:45.529759  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:22:45.691541  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:22:46.013364  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:22:46.655472  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:22:47.937166  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:22:50.500102  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:22:55.621766  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-291515 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-291515 --driver=kvm2  --container-runtime=crio: (37.731808534s)
--- PASS: TestErrorSpam/setup (37.73s)

                                                
                                    
TestErrorSpam/start (0.35s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291515 --log_dir /tmp/nospam-291515 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291515 --log_dir /tmp/nospam-291515 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291515 --log_dir /tmp/nospam-291515 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
TestErrorSpam/status (0.65s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291515 --log_dir /tmp/nospam-291515 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291515 --log_dir /tmp/nospam-291515 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291515 --log_dir /tmp/nospam-291515 status
--- PASS: TestErrorSpam/status (0.65s)

                                                
                                    
TestErrorSpam/pause (1.46s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291515 --log_dir /tmp/nospam-291515 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291515 --log_dir /tmp/nospam-291515 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291515 --log_dir /tmp/nospam-291515 pause
--- PASS: TestErrorSpam/pause (1.46s)

                                                
                                    
TestErrorSpam/unpause (1.69s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291515 --log_dir /tmp/nospam-291515 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291515 --log_dir /tmp/nospam-291515 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291515 --log_dir /tmp/nospam-291515 unpause
E1217 08:23:05.864079  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestErrorSpam/unpause (1.69s)

                                                
                                    
TestErrorSpam/stop (5.31s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291515 --log_dir /tmp/nospam-291515 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-291515 --log_dir /tmp/nospam-291515 stop: (1.818826902s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291515 --log_dir /tmp/nospam-291515 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-291515 --log_dir /tmp/nospam-291515 stop: (1.634418334s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-291515 --log_dir /tmp/nospam-291515 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-291515 --log_dir /tmp/nospam-291515 stop: (1.85949045s)
--- PASS: TestErrorSpam/stop (5.31s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22182-893359/.minikube/files/etc/test/nested/copy/897277/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (49.95s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-122342 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1217 08:23:26.345611  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-122342 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (49.949322065s)
--- PASS: TestFunctional/serial/StartWithProxy (49.95s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (36.43s)
=== RUN   TestFunctional/serial/SoftStart
I1217 08:24:02.002715  897277 config.go:182] Loaded profile config "functional-122342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-122342 --alsologtostderr -v=8
E1217 08:24:07.308198  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-122342 --alsologtostderr -v=8: (36.42755659s)
functional_test.go:678: soft start took 36.428375785s for "functional-122342" cluster.
I1217 08:24:38.430679  897277 config.go:182] Loaded profile config "functional-122342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/SoftStart (36.43s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-122342 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-122342 cache add registry.k8s.io/pause:3.1: (1.00253136s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-122342 cache add registry.k8s.io/pause:3.3: (1.13358393s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-122342 cache add registry.k8s.io/pause:latest: (1.045817453s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.18s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.47s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-122342 /tmp/TestFunctionalserialCacheCmdcacheadd_local2196522573/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 cache add minikube-local-cache-test:functional-122342
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-122342 cache add minikube-local-cache-test:functional-122342: (1.128018481s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 cache delete minikube-local-cache-test:functional-122342
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-122342
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.47s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-122342 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (179.169596ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)
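Note on the cache_reload sequence above: it removes the cached image inside the node, confirms `crictl inspecti` now fails, runs `minikube cache reload`, and confirms the image is present again. The following is a minimal Go sketch of that round trip, assuming a minikube binary on PATH and a placeholder profile name; the in-tree test drives out/minikube-linux-amd64 through its own helpers instead.

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary against a profile and returns the
// combined output together with any error (non-zero exits included).
func run(profile string, args ...string) (string, error) {
	cmd := exec.Command("minikube", append([]string{"-p", profile}, args...)...)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	profile := "functional-122342" // placeholder; any existing profile works

	// Drop the image from the node's runtime, as the test does.
	run(profile, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")

	// inspecti is now expected to fail with a non-zero exit.
	if _, err := run(profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		fmt.Println("expected inspecti to fail after rmi")
	}

	// Reload the local cache into the node; the image should be back.
	run(profile, "cache", "reload")
	if _, err := run(profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image still missing after cache reload:", err)
	}
}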

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 kubectl -- --context functional-122342 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-122342 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (36.42s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-122342 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-122342 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.416260135s)
functional_test.go:776: restart took 36.41645075s for "functional-122342" cluster.
I1217 08:25:21.859900  897277 config.go:182] Loaded profile config "functional-122342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/ExtraConfig (36.42s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-122342 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.29s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-122342 logs: (1.291355101s)
--- PASS: TestFunctional/serial/LogsCmd (1.29s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.28s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 logs --file /tmp/TestFunctionalserialLogsFileCmd2070943865/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-122342 logs --file /tmp/TestFunctionalserialLogsFileCmd2070943865/001/logs.txt: (1.27865627s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.28s)

                                                
                                    
TestFunctional/serial/InvalidService (3.98s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-122342 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-122342
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-122342: exit status 115 (252.446389ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.97:32422 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-122342 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.98s)
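Note: the InvalidService case above shows `minikube service` exiting with status 115 (SVC_UNREACHABLE) when the Service object exists but has no running pods behind it. A small Go sketch of checking for that exit code follows; the profile and service names are placeholders copied from this log, and the deliberately broken Service comes from the repo's testdata/invalidsvc.yaml.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Ask minikube to open the service; with no running pods this is
	// expected to fail rather than print a URL.
	cmd := exec.Command("minikube", "service", "invalid-svc", "-p", "functional-122342")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 115 {
		// 115 is the SVC_UNREACHABLE exit status recorded in the log above.
		fmt.Println("service unreachable as expected:\n" + string(out))
		return
	}
	fmt.Println("service reachable or unexpected error:", err)
}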

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.41s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-122342 config get cpus: exit status 14 (64.815479ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-122342 config get cpus: exit status 14 (69.788862ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)
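Note: the ConfigCmd run above relies on `minikube config get` exiting with status 14 when a key is unset. A hedged Go sketch of that set/get/unset cycle, with the profile name as a placeholder:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configGet runs `minikube config get <key>` and reports whether the key
// is set; exit status 14 means "not found", as the log above shows.
func configGet(profile, key string) (string, bool, error) {
	out, err := exec.Command("minikube", "-p", profile, "config", "get", key).CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		return "", false, nil
	}
	return string(out), err == nil, err
}

func main() {
	profile := "functional-122342" // placeholder

	exec.Command("minikube", "-p", profile, "config", "set", "cpus", "2").Run()
	if v, ok, _ := configGet(profile, "cpus"); ok {
		fmt.Println("cpus =", v)
	}

	exec.Command("minikube", "-p", profile, "config", "unset", "cpus").Run()
	if _, ok, _ := configGet(profile, "cpus"); !ok {
		fmt.Println("cpus is unset again")
	}
}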

                                                
                                    
TestFunctional/parallel/DryRun (0.21s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-122342 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-122342 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (107.0178ms)

                                                
                                                
-- stdout --
	* [functional-122342] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-893359/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-893359/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 08:26:39.281467  903820 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:26:39.281740  903820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:26:39.281755  903820 out.go:374] Setting ErrFile to fd 2...
	I1217 08:26:39.281762  903820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:26:39.282311  903820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
	I1217 08:26:39.283219  903820 out.go:368] Setting JSON to false
	I1217 08:26:39.284110  903820 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11345,"bootTime":1765948654,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:26:39.284194  903820 start.go:143] virtualization: kvm guest
	I1217 08:26:39.285704  903820 out.go:179] * [functional-122342] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 08:26:39.287082  903820 notify.go:221] Checking for updates...
	I1217 08:26:39.287091  903820 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 08:26:39.288222  903820 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:26:39.289626  903820 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-893359/kubeconfig
	I1217 08:26:39.290666  903820 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-893359/.minikube
	I1217 08:26:39.291717  903820 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 08:26:39.292778  903820 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 08:26:39.294273  903820 config.go:182] Loaded profile config "functional-122342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:26:39.294955  903820 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:26:39.325012  903820 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 08:26:39.325940  903820 start.go:309] selected driver: kvm2
	I1217 08:26:39.325960  903820 start.go:927] validating driver "kvm2" against &{Name:functional-122342 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.3 ClusterName:functional-122342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:26:39.326049  903820 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 08:26:39.327838  903820 out.go:203] 
	W1217 08:26:39.328969  903820 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 08:26:39.329920  903820 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-122342 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.21s)
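Note: DryRun validates flags without creating anything; requesting 250MB is rejected with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) because the usable minimum is 1800MB. A minimal sketch of reproducing that check from Go, assuming minikube on PATH and the same placeholder profile:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// --dry-run validates the request against the existing profile
	// without touching the VM; 250MB is far below the 1800MB minimum.
	cmd := exec.Command("minikube", "start", "-p", "functional-122342",
		"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=crio")
	_, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 23 {
		fmt.Println("rejected as expected: RSRC_INSUFFICIENT_REQ_MEMORY")
	} else {
		fmt.Println("unexpected result:", err)
	}
}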

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-122342 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-122342 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (109.993457ms)

                                                
                                                
-- stdout --
	* [functional-122342] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-893359/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-893359/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 08:26:38.755866  903790 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:26:38.755981  903790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:26:38.755992  903790 out.go:374] Setting ErrFile to fd 2...
	I1217 08:26:38.755997  903790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:26:38.756311  903790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
	I1217 08:26:38.756749  903790 out.go:368] Setting JSON to false
	I1217 08:26:38.757687  903790 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11345,"bootTime":1765948654,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:26:38.757751  903790 start.go:143] virtualization: kvm guest
	I1217 08:26:38.759645  903790 out.go:179] * [functional-122342] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1217 08:26:38.760774  903790 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 08:26:38.760787  903790 notify.go:221] Checking for updates...
	I1217 08:26:38.762959  903790 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:26:38.764169  903790 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-893359/kubeconfig
	I1217 08:26:38.765295  903790 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-893359/.minikube
	I1217 08:26:38.766285  903790 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 08:26:38.767304  903790 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 08:26:38.768753  903790 config.go:182] Loaded profile config "functional-122342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:26:38.769234  903790 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:26:38.798116  903790 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1217 08:26:38.799047  903790 start.go:309] selected driver: kvm2
	I1217 08:26:38.799061  903790 start.go:927] validating driver "kvm2" against &{Name:functional-122342 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.3 ClusterName:functional-122342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:26:38.799154  903790 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 08:26:38.800972  903790 out.go:203] 
	W1217 08:26:38.801819  903790 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1217 08:26:38.802794  903790 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.67s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.67s)
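Note: StatusCmd exercises three output forms of `minikube status`, including `-o json`. A sketch of consuming the JSON form follows; it decodes into a generic map because the full schema is not shown in this log, and only the Host field (visible in the Go template above) is assumed.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// `minikube status -o json` is one of the three invocations above.
	// Non-zero exits can still carry useful JSON, so keep the output.
	out, err := exec.Command("minikube", "-p", "functional-122342", "status", "-o", "json").Output()
	if err != nil {
		fmt.Println("status exited non-zero:", err)
	}

	var st map[string]any
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("could not decode status JSON:", err)
		return
	}
	fmt.Println("host state:", st["Host"])
}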

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (79.52s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [2b2d63c8-592a-4d1c-a3e6-cfcc8c3d6ee9] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003785659s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-122342 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-122342 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-122342 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-122342 apply -f testdata/storage-provisioner/pod.yaml
I1217 08:25:34.942251  897277 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [2ccde94a-f31b-45d6-a0dc-56b1c9c65987] Pending
helpers_test.go:353: "sp-pod" [2ccde94a-f31b-45d6-a0dc-56b1c9c65987] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [2ccde94a-f31b-45d6-a0dc-56b1c9c65987] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 1m6.007348272s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-122342 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-122342 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-122342 apply -f testdata/storage-provisioner/pod.yaml
I1217 08:26:41.965881  897277 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [db4392ab-d4a8-4176-a0d5-5e79bc4d77ed] Pending
helpers_test.go:353: "sp-pod" [db4392ab-d4a8-4176-a0d5-5e79bc4d77ed] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004297002s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-122342 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (79.52s)
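Note: the PersistentVolumeClaim flow above is: create the PVC, run a pod that mounts it, write a file through the pod, delete and recreate the pod, then confirm the file survived. A compressed Go sketch of that flow is below; the kube context and testdata manifest paths are taken from this log, and the readiness waits the real test performs between steps are omitted for brevity.

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl command against the test context (placeholder
// name) and returns the combined output.
func kubectl(args ...string) (string, error) {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-122342"}, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	// Same sequence as the test; in practice each step needs a wait for
	// the PVC to bind and the pod to become Ready.
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")

	out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Println(out, err) // expect "foo" to still be listed
}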

                                                
                                    
TestFunctional/parallel/SSHCmd (0.36s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.36s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.15s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh -n functional-122342 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 cp functional-122342:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2452225889/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh -n functional-122342 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh -n functional-122342 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.15s)

                                                
                                    
TestFunctional/parallel/MySQL (131.35s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-122342 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-g9l2q" [2acc9aa8-e16c-4de2-8104-2731803a9cc0] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E1217 08:27:45.365235  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:28:13.071791  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "mysql-6bcdcbc558-g9l2q" [2acc9aa8-e16c-4de2-8104-2731803a9cc0] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 2m5.006039386s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-122342 exec mysql-6bcdcbc558-g9l2q -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-122342 exec mysql-6bcdcbc558-g9l2q -- mysql -ppassword -e "show databases;": exit status 1 (176.068772ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 08:29:01.179767  897277 retry.go:31] will retry after 1.229137246s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-122342 exec mysql-6bcdcbc558-g9l2q -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-122342 exec mysql-6bcdcbc558-g9l2q -- mysql -ppassword -e "show databases;": exit status 1 (175.602108ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 08:29:02.585401  897277 retry.go:31] will retry after 1.437420222s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-122342 exec mysql-6bcdcbc558-g9l2q -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-122342 exec mysql-6bcdcbc558-g9l2q -- mysql -ppassword -e "show databases;": exit status 1 (122.015181ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 08:29:04.145164  897277 retry.go:31] will retry after 2.890557646s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-122342 exec mysql-6bcdcbc558-g9l2q -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (131.35s)
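Note: the MySQL check above retries the client until mysqld inside the pod finishes initialising (first "Access denied", then socket errors, then success). A sketch of that retry-with-backoff loop, with the pod name copied from this log and an illustrative backoff schedule rather than the test's exact one:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// kubectl exec into the mysql pod and run a trivial query; retry
	// until the server is ready to accept the root password.
	args := []string{"--context", "functional-122342", "exec", "mysql-6bcdcbc558-g9l2q",
		"--", "mysql", "-ppassword", "-e", "show databases;"}

	backoff := time.Second
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		fmt.Printf("attempt %d failed (%v), retrying in %s\n", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
	fmt.Println("mysql never became ready")
}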

                                                
                                    
TestFunctional/parallel/FileSync (0.16s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/897277/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh "sudo cat /etc/test/nested/copy/897277/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.16s)

                                                
                                    
TestFunctional/parallel/CertSync (0.98s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/897277.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh "sudo cat /etc/ssl/certs/897277.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/897277.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh "sudo cat /usr/share/ca-certificates/897277.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/8972772.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh "sudo cat /etc/ssl/certs/8972772.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/8972772.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh "sudo cat /usr/share/ca-certificates/8972772.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.98s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-122342 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.34s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-122342 ssh "sudo systemctl is-active docker": exit status 1 (164.515926ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-122342 ssh "sudo systemctl is-active containerd": exit status 1 (172.477573ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.34s)
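Note: on a crio cluster the docker and containerd units should be inactive; `systemctl is-active` prints the state and exits 0 only when the unit is active, which is why `minikube ssh` reports "Process exited with status 3" above. A hedged Go sketch of that check, with the profile name as a placeholder:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// runtimeActive asks systemd inside the node whether a unit is active.
// Any non-zero exit (the "inactive" case in the log above) is treated as
// not active; the printed state string is returned either way.
func runtimeActive(profile, unit string) (bool, string) {
	out, err := exec.Command("minikube", "-p", profile, "ssh",
		"sudo systemctl is-active "+unit).CombinedOutput()
	state := strings.TrimSpace(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return false, state
	}
	return err == nil, state
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		active, state := runtimeActive("functional-122342", unit)
		fmt.Printf("%s: active=%v state=%q\n", unit, active, state)
	}
}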

                                                
                                    
TestFunctional/parallel/License (0.42s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "245.82252ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "65.879646ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "249.945339ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "62.714082ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (64.75s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-122342 /tmp/TestFunctionalparallelMountCmdany-port911539876/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765959930584057978" to /tmp/TestFunctionalparallelMountCmdany-port911539876/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765959930584057978" to /tmp/TestFunctionalparallelMountCmdany-port911539876/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765959930584057978" to /tmp/TestFunctionalparallelMountCmdany-port911539876/001/test-1765959930584057978
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-122342 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (160.111604ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 08:25:30.744440  897277 retry.go:31] will retry after 259.596877ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 17 08:25 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 17 08:25 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 17 08:25 test-1765959930584057978
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh cat /mount-9p/test-1765959930584057978
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-122342 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [ab11f47f-3405-419b-90f6-1e11c8e5cd9e] Pending
helpers_test.go:353: "busybox-mount" [ab11f47f-3405-419b-90f6-1e11c8e5cd9e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [ab11f47f-3405-419b-90f6-1e11c8e5cd9e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [ab11f47f-3405-419b-90f6-1e11c8e5cd9e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 1m3.004324121s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-122342 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-122342 /tmp/TestFunctionalparallelMountCmdany-port911539876/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (64.75s)
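Note: the any-port mount test starts `minikube mount` as a background process and then polls `findmnt` inside the guest until the 9p mount appears (the first check above fails and is retried). A rough Go sketch of that pattern follows; the host directory and profile name are placeholders, and cleanup here simply kills the mount process rather than running the test's `umount -f` step.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	profile := "functional-122342" // placeholder
	hostDir := "/tmp/mount-demo"   // placeholder; must already exist on the host

	// Start the 9p mount in the background, like the test's daemon helper.
	mount := exec.Command("minikube", "mount", "-p", profile, hostDir+":/mount-9p")
	if err := mount.Start(); err != nil {
		fmt.Println("mount failed to start:", err)
		return
	}
	defer mount.Process.Kill()

	// The mount is not instantaneous, so poll findmnt inside the guest,
	// mirroring the retry the log shows after the first failed check.
	for i := 0; i < 10; i++ {
		out, err := exec.Command("minikube", "-p", profile, "ssh",
			"findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("mount never became visible in the guest")
}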

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.37s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-122342 /tmp/TestFunctionalparallelMountCmdspecific-port2256027264/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-122342 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (190.531602ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 08:26:35.522548  897277 retry.go:31] will retry after 502.451179ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-122342 /tmp/TestFunctionalparallelMountCmdspecific-port2256027264/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-122342 ssh "sudo umount -f /mount-9p": exit status 1 (159.364389ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-122342 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-122342 /tmp/TestFunctionalparallelMountCmdspecific-port2256027264/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.37s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (0.99s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-122342 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3180331613/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-122342 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3180331613/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-122342 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3180331613/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-122342 ssh "findmnt -T" /mount1: exit status 1 (175.191439ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1217 08:26:36.879219  897277 retry.go:31] will retry after 283.98127ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-122342 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-122342 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3180331613/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-122342 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3180331613/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-122342 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3180331613/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.99s)

TestFunctional/parallel/Version/short (0.07s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.45s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.45s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-122342 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.3
registry.k8s.io/kube-proxy:v1.34.3
registry.k8s.io/kube-controller-manager:v1.34.3
registry.k8s.io/kube-apiserver:v1.34.3
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-122342
localhost/kicbase/echo-server:functional-122342
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-122342 image ls --format short --alsologtostderr:
I1217 08:29:07.736476  904755 out.go:360] Setting OutFile to fd 1 ...
I1217 08:29:07.736607  904755 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:29:07.736616  904755 out.go:374] Setting ErrFile to fd 2...
I1217 08:29:07.736620  904755 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:29:07.736828  904755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
I1217 08:29:07.737699  904755 config.go:182] Loaded profile config "functional-122342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 08:29:07.737859  904755 config.go:182] Loaded profile config "functional-122342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 08:29:07.740275  904755 ssh_runner.go:195] Run: systemctl --version
I1217 08:29:07.742453  904755 main.go:143] libmachine: domain functional-122342 has defined MAC address 52:54:00:ba:6d:2c in network mk-functional-122342
I1217 08:29:07.742827  904755 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:6d:2c", ip: ""} in network mk-functional-122342: {Iface:virbr1 ExpiryTime:2025-12-17 09:23:27 +0000 UTC Type:0 Mac:52:54:00:ba:6d:2c Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:functional-122342 Clientid:01:52:54:00:ba:6d:2c}
I1217 08:29:07.742852  904755 main.go:143] libmachine: domain functional-122342 has defined IP address 192.168.39.97 and MAC address 52:54:00:ba:6d:2c in network mk-functional-122342
I1217 08:29:07.743008  904755 sshutil.go:56] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/functional-122342/id_rsa Username:docker}
I1217 08:29:07.826821  904755 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-122342 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-controller-manager │ v1.34.3            │ 5826b25d990d7 │ 76MB   │
│ localhost/minikube-local-cache-test     │ functional-122342  │ e1c7d37f901be │ 3.33kB │
│ registry.k8s.io/kube-apiserver          │ v1.34.3            │ aa27095f56193 │ 89.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/kicbase/echo-server           │ functional-122342  │ 9056ab77afb8e │ 4.94MB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.3            │ aec12dadf56dd │ 53.9MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ localhost/my-image                      │ functional-122342  │ 55481a8027d4c │ 1.47MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-proxy              │ v1.34.3            │ 36eef8e07bdd6 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-122342 image ls --format table --alsologtostderr:
I1217 08:29:10.276822  904810 out.go:360] Setting OutFile to fd 1 ...
I1217 08:29:10.276937  904810 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:29:10.276946  904810 out.go:374] Setting ErrFile to fd 2...
I1217 08:29:10.276949  904810 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:29:10.277130  904810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
I1217 08:29:10.277684  904810 config.go:182] Loaded profile config "functional-122342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 08:29:10.277776  904810 config.go:182] Loaded profile config "functional-122342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 08:29:10.279887  904810 ssh_runner.go:195] Run: systemctl --version
I1217 08:29:10.282111  904810 main.go:143] libmachine: domain functional-122342 has defined MAC address 52:54:00:ba:6d:2c in network mk-functional-122342
I1217 08:29:10.282534  904810 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:6d:2c", ip: ""} in network mk-functional-122342: {Iface:virbr1 ExpiryTime:2025-12-17 09:23:27 +0000 UTC Type:0 Mac:52:54:00:ba:6d:2c Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:functional-122342 Clientid:01:52:54:00:ba:6d:2c}
I1217 08:29:10.282560  904810 main.go:143] libmachine: domain functional-122342 has defined IP address 192.168.39.97 and MAC address 52:54:00:ba:6d:2c in network mk-functional-122342
I1217 08:29:10.282747  904810 sshutil.go:56] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/functional-122342/id_rsa Username:docker}
I1217 08:29:10.364997  904810 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-122342 image ls --format json --alsologtostderr:
[{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-122342"],"siz
e":"4943877"},{"id":"5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954","registry.k8s.io/kube-controller-manager@sha256:90ceecee64b3dac0e619928b9b21522bde1a120bb039971110ab68d830c1f1a2"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.3"],"size":"76004183"},{"id":"36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691","repoDigests":["registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6","registry.k8s.io/kube-proxy@sha256:aee44d152c9eaa4f3e10584e61ee501a094880168db257af1201c806982a0945"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.3"],"size":"73145241"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"c
d073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s
-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"55481a8027d4ca4ecc6840cd91e3bdca435ba4f2967155d10d901b556b8355cc","repoDigests":["localhost/my-image@sha256:0ee1819d6e14b27a77d13d67110125c33d397bf8269478761e88e2e425927069"],"repoTags":["localhost/my-image:functional-122342"],"size":"1468600"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460","registry.k8s.io/kube-apiserver@sha256:9b2e9bae4dc9499
1e51c601ba6a00369b45064243ba7822143b286edb9d41f9e"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.3"],"size":"89050097"},{"id":"aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78","repoDigests":["registry.k8s.io/kube-scheduler@sha256:490ff7b484d67db4a77e8d4bba9f12da68f6a3cae8da3b977522b60c8b5092c9","registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.3"],"size":"53853013"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"0673980e9c3c010847c34101f0b4002baf69a899dc1dcb689cbcbce7fa276751","repoDigests":["docker.io/library/d2a8bb6e030d35c93c9972353ca559f8669823b8cd395825
ef02c4971751a1fd-tmp@sha256:a6fec3b1925d7078e7bf5d8b4429896e7a95110a395c4fb16bf4261cb720cb9c"],"repoTags":[],"size":"1466018"},{"id":"e1c7d37f901bec57e9640f4323b5bde0929d5e22f0f069c32ba970d569d6b8b0","repoDigests":["localhost/minikube-local-cache-test@sha256:778c23b9bfa9b6c96d0e8f3faeb92bb921972a5397fdf81a9295da61f932dcf1"],"repoTags":["localhost/minikube-local-cache-test:functional-122342"],"size":"3330"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha
256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-122342 image ls --format json --alsologtostderr:
I1217 08:29:10.475743  904821 out.go:360] Setting OutFile to fd 1 ...
I1217 08:29:10.475836  904821 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:29:10.475844  904821 out.go:374] Setting ErrFile to fd 2...
I1217 08:29:10.475848  904821 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:29:10.476096  904821 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
I1217 08:29:10.476738  904821 config.go:182] Loaded profile config "functional-122342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 08:29:10.476832  904821 config.go:182] Loaded profile config "functional-122342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 08:29:10.478846  904821 ssh_runner.go:195] Run: systemctl --version
I1217 08:29:10.480893  904821 main.go:143] libmachine: domain functional-122342 has defined MAC address 52:54:00:ba:6d:2c in network mk-functional-122342
I1217 08:29:10.481326  904821 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:6d:2c", ip: ""} in network mk-functional-122342: {Iface:virbr1 ExpiryTime:2025-12-17 09:23:27 +0000 UTC Type:0 Mac:52:54:00:ba:6d:2c Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:functional-122342 Clientid:01:52:54:00:ba:6d:2c}
I1217 08:29:10.481355  904821 main.go:143] libmachine: domain functional-122342 has defined IP address 192.168.39.97 and MAC address 52:54:00:ba:6d:2c in network mk-functional-122342
I1217 08:29:10.481533  904821 sshutil.go:56] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/functional-122342/id_rsa Username:docker}
I1217 08:29:10.565390  904821 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-122342 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954
- registry.k8s.io/kube-controller-manager@sha256:90ceecee64b3dac0e619928b9b21522bde1a120bb039971110ab68d830c1f1a2
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.3
size: "76004183"
- id: aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:490ff7b484d67db4a77e8d4bba9f12da68f6a3cae8da3b977522b60c8b5092c9
- registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.3
size: "53853013"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: e1c7d37f901bec57e9640f4323b5bde0929d5e22f0f069c32ba970d569d6b8b0
repoDigests:
- localhost/minikube-local-cache-test@sha256:778c23b9bfa9b6c96d0e8f3faeb92bb921972a5397fdf81a9295da61f932dcf1
repoTags:
- localhost/minikube-local-cache-test:functional-122342
size: "3330"
- id: 36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691
repoDigests:
- registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6
- registry.k8s.io/kube-proxy@sha256:aee44d152c9eaa4f3e10584e61ee501a094880168db257af1201c806982a0945
repoTags:
- registry.k8s.io/kube-proxy:v1.34.3
size: "73145241"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460
- registry.k8s.io/kube-apiserver@sha256:9b2e9bae4dc94991e51c601ba6a00369b45064243ba7822143b286edb9d41f9e
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.3
size: "89050097"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-122342
size: "4943877"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-122342 image ls --format yaml --alsologtostderr:
I1217 08:29:07.924070  904766 out.go:360] Setting OutFile to fd 1 ...
I1217 08:29:07.924344  904766 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:29:07.924354  904766 out.go:374] Setting ErrFile to fd 2...
I1217 08:29:07.924358  904766 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:29:07.924553  904766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
I1217 08:29:07.925100  904766 config.go:182] Loaded profile config "functional-122342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 08:29:07.925197  904766 config.go:182] Loaded profile config "functional-122342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 08:29:07.927323  904766 ssh_runner.go:195] Run: systemctl --version
I1217 08:29:07.929446  904766 main.go:143] libmachine: domain functional-122342 has defined MAC address 52:54:00:ba:6d:2c in network mk-functional-122342
I1217 08:29:07.929862  904766 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:6d:2c", ip: ""} in network mk-functional-122342: {Iface:virbr1 ExpiryTime:2025-12-17 09:23:27 +0000 UTC Type:0 Mac:52:54:00:ba:6d:2c Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:functional-122342 Clientid:01:52:54:00:ba:6d:2c}
I1217 08:29:07.929887  904766 main.go:143] libmachine: domain functional-122342 has defined IP address 192.168.39.97 and MAC address 52:54:00:ba:6d:2c in network mk-functional-122342
I1217 08:29:07.930014  904766 sshutil.go:56] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/functional-122342/id_rsa Username:docker}
I1217 08:29:08.014059  904766 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.17s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-122342 ssh pgrep buildkitd: exit status 1 (157.805928ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 image build -t localhost/my-image:functional-122342 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-122342 image build -t localhost/my-image:functional-122342 testdata/build --alsologtostderr: (1.814158068s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-122342 image build -t localhost/my-image:functional-122342 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 0673980e9c3
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-122342
--> 55481a8027d
Successfully tagged localhost/my-image:functional-122342
55481a8027d4ca4ecc6840cd91e3bdca435ba4f2967155d10d901b556b8355cc
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-122342 image build -t localhost/my-image:functional-122342 testdata/build --alsologtostderr:
I1217 08:29:08.266804  904788 out.go:360] Setting OutFile to fd 1 ...
I1217 08:29:08.267086  904788 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:29:08.267096  904788 out.go:374] Setting ErrFile to fd 2...
I1217 08:29:08.267100  904788 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:29:08.267293  904788 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
I1217 08:29:08.267823  904788 config.go:182] Loaded profile config "functional-122342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 08:29:08.268654  904788 config.go:182] Loaded profile config "functional-122342": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 08:29:08.270694  904788 ssh_runner.go:195] Run: systemctl --version
I1217 08:29:08.272789  904788 main.go:143] libmachine: domain functional-122342 has defined MAC address 52:54:00:ba:6d:2c in network mk-functional-122342
I1217 08:29:08.273168  904788 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:6d:2c", ip: ""} in network mk-functional-122342: {Iface:virbr1 ExpiryTime:2025-12-17 09:23:27 +0000 UTC Type:0 Mac:52:54:00:ba:6d:2c Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:functional-122342 Clientid:01:52:54:00:ba:6d:2c}
I1217 08:29:08.273196  904788 main.go:143] libmachine: domain functional-122342 has defined IP address 192.168.39.97 and MAC address 52:54:00:ba:6d:2c in network mk-functional-122342
I1217 08:29:08.273338  904788 sshutil.go:56] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/functional-122342/id_rsa Username:docker}
I1217 08:29:08.355538  904788 build_images.go:162] Building image from path: /tmp/build.4144872376.tar
I1217 08:29:08.355619  904788 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 08:29:08.368740  904788 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4144872376.tar
I1217 08:29:08.374048  904788 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4144872376.tar: stat -c "%s %y" /var/lib/minikube/build/build.4144872376.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4144872376.tar': No such file or directory
I1217 08:29:08.374078  904788 ssh_runner.go:362] scp /tmp/build.4144872376.tar --> /var/lib/minikube/build/build.4144872376.tar (3072 bytes)
I1217 08:29:08.411558  904788 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4144872376
I1217 08:29:08.425074  904788 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4144872376 -xf /var/lib/minikube/build/build.4144872376.tar
I1217 08:29:08.436731  904788 crio.go:315] Building image: /var/lib/minikube/build/build.4144872376
I1217 08:29:08.436832  904788 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-122342 /var/lib/minikube/build/build.4144872376 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1217 08:29:09.993072  904788 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-122342 /var/lib/minikube/build/build.4144872376 --cgroup-manager=cgroupfs: (1.556198759s)
I1217 08:29:09.993181  904788 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4144872376
I1217 08:29:10.006812  904788 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4144872376.tar
I1217 08:29:10.019695  904788 build_images.go:218] Built localhost/my-image:functional-122342 from /tmp/build.4144872376.tar
I1217 08:29:10.019738  904788 build_images.go:134] succeeded building to: functional-122342
I1217 08:29:10.019745  904788 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.17s)

TestFunctional/parallel/ImageCommands/Setup (0.95s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-122342
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.95s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 image load --daemon kicbase/echo-server:functional-122342 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-122342 image load --daemon kicbase/echo-server:functional-122342 --alsologtostderr: (1.085447693s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.28s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.82s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 image load --daemon kicbase/echo-server:functional-122342 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.82s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-122342
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 image load --daemon kicbase/echo-server:functional-122342 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.22s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.5s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 image save kicbase/echo-server:functional-122342 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.50s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 image rm kicbase/echo-server:functional-122342 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-122342
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 image save --daemon kicbase/echo-server:functional-122342 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-122342
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

TestFunctional/parallel/ServiceCmd/List (1.22s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-122342 service list: (1.224201236s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.22s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.21s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-122342 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-122342 service list -o json: (1.214490302s)
functional_test.go:1504: Took "1.214599026s" to run "out/minikube-linux-amd64 -p functional-122342 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.21s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-122342
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-122342
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-122342
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22182-893359/.minikube/files/etc/test/nested/copy/897277/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (51.32s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-452472 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-452472 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (51.31834859s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (51.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (123.87s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart
I1217 08:36:27.291124  897277 config.go:182] Loaded profile config "functional-452472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-452472 --alsologtostderr -v=8
E1217 08:37:45.370756  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-452472 --alsologtostderr -v=8: (2m3.873820382s)
functional_test.go:678: soft start took 2m3.874207761s for "functional-452472" cluster.
I1217 08:38:31.165314  897277 config.go:182] Loaded profile config "functional-452472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (123.87s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.04s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-452472 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.09s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (3.36s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-452472 cache add registry.k8s.io/pause:3.1: (1.066564437s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-452472 cache add registry.k8s.io/pause:3.3: (1.10361663s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-452472 cache add registry.k8s.io/pause:latest: (1.191011708s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (3.36s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.4s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-452472 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialCacheC1548115513/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 cache add minikube-local-cache-test:functional-452472
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-452472 cache add minikube-local-cache-test:functional-452472: (1.104363639s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 cache delete minikube-local-cache-test:functional-452472
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-452472
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.54s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452472 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (178.391945ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 kubectl -- --context functional-452472 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-452472 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (30.77s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-452472 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1217 08:39:08.433248  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-452472 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (30.765513061s)
functional_test.go:776: restart took 30.765616429s for "functional-452472" cluster.
I1217 08:39:09.046897  897277 config.go:182] Loaded profile config "functional-452472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (30.77s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-452472 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.06s)
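
ComponentHealth lists the control-plane pods with kubectl get po -l tier=control-plane -n kube-system -o=json and checks each pod's phase and Ready condition, which is what produces the phase/status lines above. A rough sketch of that check, assuming kubectl is on PATH and using the context name from this log; the struct models only the fields the check reads:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// podList models just the fields the health check needs from the pod list JSON.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-452472",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		log.Fatalf("kubectl get po: %v", err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		// Expect phase Running and Ready=True for etcd, kube-apiserver, and friends.
		fmt.Printf("%s phase: %s, ready: %s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}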

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-452472 logs: (1.381949093s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.38s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi1046259726/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-452472 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi1046259726/001/logs.txt: (1.414564583s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.42s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-452472 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-452472
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-452472: exit status 115 (235.046136ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.226:32335 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-452472 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452472 config get cpus: exit status 14 (70.773838ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452472 config get cpus: exit status 14 (73.32251ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.47s)
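
ConfigCmd cycles config unset, get, and set for the cpus key; config get on an unset key exits with status 14 and prints "specified key could not be found in config", which the test treats as the expected outcome. A small sketch of reading that exit code, with the binary path and profile name taken from this log:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func configGet(key string) (string, int, error) {
	// Binary path and profile name are taken from this log; adjust for your setup.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-452472", "config", "get", key)
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// As seen above, minikube exits with status 14 when the key is not set.
		return "", exitErr.ExitCode(), nil
	}
	if err != nil {
		return "", 0, err
	}
	return string(out), 0, nil
}

func main() {
	val, code, err := configGet("cpus")
	if err != nil {
		log.Fatal(err)
	}
	if code == 14 {
		fmt.Println("cpus is not set")
		return
	}
	fmt.Printf("cpus = %s", val)
}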

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-452472 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-452472 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (124.145526ms)

                                                
                                                
-- stdout --
	* [functional-452472] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-893359/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-893359/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 08:40:33.460530  909259 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:40:33.460640  909259 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:40:33.460646  909259 out.go:374] Setting ErrFile to fd 2...
	I1217 08:40:33.460652  909259 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:40:33.460872  909259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
	I1217 08:40:33.461320  909259 out.go:368] Setting JSON to false
	I1217 08:40:33.462201  909259 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12179,"bootTime":1765948654,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:40:33.462260  909259 start.go:143] virtualization: kvm guest
	I1217 08:40:33.464193  909259 out.go:179] * [functional-452472] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 08:40:33.465877  909259 notify.go:221] Checking for updates...
	I1217 08:40:33.465899  909259 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 08:40:33.467474  909259 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:40:33.468752  909259 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-893359/kubeconfig
	I1217 08:40:33.469916  909259 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-893359/.minikube
	I1217 08:40:33.471126  909259 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 08:40:33.472406  909259 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 08:40:33.474148  909259 config.go:182] Loaded profile config "functional-452472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:40:33.474628  909259 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:40:33.511231  909259 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 08:40:33.512187  909259 start.go:309] selected driver: kvm2
	I1217 08:40:33.512200  909259 start.go:927] validating driver "kvm2" against &{Name:functional-452472 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-rc.1 ClusterName:functional-452472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.226 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:262
80h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:40:33.512309  909259 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 08:40:33.514031  909259 out.go:203] 
	W1217 08:40:33.515065  909259 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 08:40:33.515999  909259 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-452472 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.23s)
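
DryRun requests 250MB of memory and expects start --dry-run to refuse with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) without modifying the existing profile. A sketch of driving that check from Go; the flag set is copied from the invocation above:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Flags copied from the DryRun invocation in this log.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-452472",
		"--dry-run", "--memory", "250MB", "--alsologtostderr",
		"--driver=kvm2", "--container-runtime=crio", "--kubernetes-version=v1.35.0-rc.1")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 23 {
		// Exit status 23 is what the RSRC_INSUFFICIENT_REQ_MEMORY rejection produced above.
		fmt.Println("dry run correctly rejected the 250MB request")
		return
	}
	log.Fatalf("expected exit status 23, got err=%v\noutput:\n%s", err, out)
}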

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-452472 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-452472 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (117.609339ms)

                                                
                                                
-- stdout --
	* [functional-452472] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-893359/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-893359/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 08:40:33.874428  909302 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:40:33.874739  909302 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:40:33.874750  909302 out.go:374] Setting ErrFile to fd 2...
	I1217 08:40:33.874755  909302 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:40:33.875044  909302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
	I1217 08:40:33.875537  909302 out.go:368] Setting JSON to false
	I1217 08:40:33.876605  909302 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12180,"bootTime":1765948654,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 08:40:33.876662  909302 start.go:143] virtualization: kvm guest
	I1217 08:40:33.878686  909302 out.go:179] * [functional-452472] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1217 08:40:33.880166  909302 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 08:40:33.880164  909302 notify.go:221] Checking for updates...
	I1217 08:40:33.881527  909302 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 08:40:33.882780  909302 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-893359/kubeconfig
	I1217 08:40:33.884018  909302 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-893359/.minikube
	I1217 08:40:33.885026  909302 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 08:40:33.886072  909302 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 08:40:33.887822  909302 config.go:182] Loaded profile config "functional-452472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 08:40:33.888499  909302 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 08:40:33.919391  909302 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1217 08:40:33.920611  909302 start.go:309] selected driver: kvm2
	I1217 08:40:33.920623  909302 start.go:927] validating driver "kvm2" against &{Name:functional-452472 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-rc.1 ClusterName:functional-452472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.226 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:262
80h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 08:40:33.920711  909302 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 08:40:33.922412  909302 out.go:203] 
	W1217 08:40:33.923375  909302 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1217 08:40:33.924380  909302 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (0.69s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (0.69s)
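
StatusCmd exercises three output forms of minikube status: the default table, a custom Go template via -f, and JSON via -o json. A short sketch that runs all three, with the template string copied verbatim from the command above and the binary and profile names taken from this log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	bin, profile := "out/minikube-linux-amd64", "functional-452472" // names taken from this log
	for _, args := range [][]string{
		{"-p", profile, "status"},
		// Custom format built from the status struct fields (template copied from the log).
		{"-p", profile, "status", "-f", "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"},
		{"-p", profile, "status", "-o", "json"},
	} {
		out, err := exec.Command(bin, args...).CombinedOutput()
		fmt.Printf("$ %s %v (err=%v)\n%s\n", bin, args, err, out)
	}
}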

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (82.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [cc28e214-79dc-4410-9e19-5e01dc8c177e] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003252892s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-452472 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-452472 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-452472 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-452472 apply -f testdata/storage-provisioner/pod.yaml
I1217 08:39:23.074688  897277 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [6c306e43-6951-420c-9bc5-841a32473efd] Pending
helpers_test.go:353: "sp-pod" [6c306e43-6951-420c-9bc5-841a32473efd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [6c306e43-6951-420c-9bc5-841a32473efd] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 1m8.004317391s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-452472 exec sp-pod -- touch /tmp/mount/foo
E1217 08:40:31.144626  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-452472 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-452472 delete -f testdata/storage-provisioner/pod.yaml: (1.481564609s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-452472 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [0cb0b118-a214-4183-94c8-217df6984d7e] Pending
helpers_test.go:353: "sp-pod" [0cb0b118-a214-4183-94c8-217df6984d7e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003703707s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-452472 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (82.25s)
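
The PersistentVolumeClaim steps apply testdata/storage-provisioner/pvc.yaml and pod.yaml, write /tmp/mount/foo from the first sp-pod, delete and recreate the pod, and list /tmp/mount to confirm the file survived the restart. A condensed sketch of that persistence check, assuming the same testdata paths and context name as in this log and leaving out the readiness waits the real test performs:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func kubectl(args ...string) []byte {
	all := append([]string{"--context", "functional-452472"}, args...)
	out, err := exec.Command("kubectl", all...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return out
}

func main() {
	// Paths below are the testdata files referenced in this log.
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (The real test waits for sp-pod to become Ready before the exec.)
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")

	// Recreate the pod; the PVC-backed volume should keep the file.
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (Again, wait for Ready before the exec in practice.)
	out := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Printf("contents after pod restart:\n%s", out)
}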

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh -n functional-452472 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 cp functional-452472:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm1873718041/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh -n functional-452472 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh -n functional-452472 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.19s)
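
CpCmd copies a local file into the node and reads it back over ssh to confirm the contents. A minimal sketch of that round trip, with the paths and the ssh -n form taken from the commands above:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	bin, profile := "out/minikube-linux-amd64", "functional-452472" // names taken from this log
	// Host -> node copy; a bare destination path refers to the node, as in the log.
	if out, err := exec.Command(bin, "-p", profile, "cp",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		log.Fatalf("cp into node: %v\n%s", err, out)
	}
	// Read the file back over ssh to confirm the copy landed.
	out, err := exec.Command(bin, "-p", profile, "ssh", "-n", profile,
		"sudo cat /home/docker/cp-test.txt").CombinedOutput()
	if err != nil {
		log.Fatalf("ssh cat: %v\n%s", err, out)
	}
	fmt.Printf("copied content:\n%s", out)
}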

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (92.72s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-452472 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-tv9dk" [8efc52d2-890d-4c37-babf-ec218c8544df] Pending
helpers_test.go:353: "mysql-7d7b65bc95-tv9dk" [8efc52d2-890d-4c37-babf-ec218c8544df] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E1217 08:40:38.828433  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "mysql-7d7b65bc95-tv9dk" [8efc52d2-890d-4c37-babf-ec218c8544df] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: app=mysql healthy within 1m28.004939581s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-452472 exec mysql-7d7b65bc95-tv9dk -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-452472 exec mysql-7d7b65bc95-tv9dk -- mysql -ppassword -e "show databases;": exit status 1 (162.374467ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 08:42:02.261490  897277 retry.go:31] will retry after 752.426902ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-452472 exec mysql-7d7b65bc95-tv9dk -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-452472 exec mysql-7d7b65bc95-tv9dk -- mysql -ppassword -e "show databases;": exit status 1 (172.834776ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 08:42:03.187127  897277 retry.go:31] will retry after 800.214039ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-452472 exec mysql-7d7b65bc95-tv9dk -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-452472 exec mysql-7d7b65bc95-tv9dk -- mysql -ppassword -e "show databases;": exit status 1 (136.05636ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 08:42:04.124193  897277 retry.go:31] will retry after 2.394544702s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-452472 exec mysql-7d7b65bc95-tv9dk -- mysql -ppassword -e "show databases;"
E1217 08:42:45.365539  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:43:12.436658  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:45:28.574578  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:45:56.278595  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:47:45.365517  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (92.72s)
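
The MySQL steps deploy testdata/mysql.yaml, wait for the pod, and then retry mysql -ppassword -e "show databases;" because mysqld keeps rejecting connections for a while after the container reports Running (the 1045 and 2002 errors above). A compact sketch of that retry loop; the pod name below is the one from this run and would normally be looked up via the app=mysql label:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	// Pod name taken from this log run; look it up dynamically in real use.
	pod := "mysql-7d7b65bc95-tv9dk"
	deadline := time.Now().Add(2 * time.Minute)
	for {
		out, err := exec.Command("kubectl", "--context", "functional-452472",
			"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("databases:\n%s", out)
			return
		}
		// Access-denied (1045) and socket (2002) errors are expected while mysqld finishes initializing.
		if time.Now().After(deadline) {
			log.Fatalf("mysql never became reachable: %v\n%s", err, out)
		}
		time.Sleep(2 * time.Second)
	}
}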

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/897277/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh "sudo cat /etc/test/nested/copy/897277/hosts"
E1217 08:40:33.706357  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/897277.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh "sudo cat /etc/ssl/certs/897277.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/897277.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh "sudo cat /usr/share/ca-certificates/897277.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/8972772.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh "sudo cat /etc/ssl/certs/8972772.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/8972772.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh "sudo cat /usr/share/ca-certificates/8972772.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-452472 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452472 ssh "sudo systemctl is-active docker": exit status 1 (159.399037ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452472 ssh "sudo systemctl is-active containerd": exit status 1 (172.091761ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.33s)
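
Because this profile uses crio, the docker and containerd units are expected to be inactive, and systemctl is-active reports that with a non-zero exit (the ssh process exits with status 3 above while printing "inactive"). A sketch that checks both units and reads the printed state rather than relying on the exit code alone:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	bin, profile := "out/minikube-linux-amd64", "functional-452472" // names taken from this log
	for _, unit := range []string{"docker", "containerd"} {
		// is-active prints the unit state on stdout and exits non-zero for anything
		// but "active", so a non-nil err is expected for an inactive unit.
		out, err := exec.Command(bin, "-p", profile, "ssh",
			"sudo systemctl is-active "+unit).Output()
		state := strings.TrimSpace(string(out))
		fmt.Printf("%s: %s (err=%v)\n", unit, state, err)
		if state == "active" {
			fmt.Printf("unexpected: %s is active on a crio profile\n", unit)
		}
	}
}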

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "242.200487ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "63.222512ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "231.826274ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "67.236711ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.30s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (63.95s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-452472 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3408977949/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765960759027764435" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3408977949/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765960759027764435" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3408977949/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765960759027764435" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3408977949/001/test-1765960759027764435
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452472 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (153.478515ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 08:39:19.181609  897277 retry.go:31] will retry after 519.597484ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 17 08:39 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 17 08:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 17 08:39 test-1765960759027764435
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh cat /mount-9p/test-1765960759027764435
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-452472 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [096ac532-94e3-4c84-834b-a3749b9fc71c] Pending
helpers_test.go:353: "busybox-mount" [096ac532-94e3-4c84-834b-a3749b9fc71c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [096ac532-94e3-4c84-834b-a3749b9fc71c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [096ac532-94e3-4c84-834b-a3749b9fc71c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 1m2.003720657s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-452472 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-452472 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3408977949/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (63.95s)
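
The any-port mount test starts minikube mount as a long-lived process, polls findmnt -T /mount-9p over ssh until the 9p mount appears (the first probe above lost that race and was retried), works with the mounted files, and finally stops the mount process. A sketch of the start/poll/stop skeleton, assuming a hypothetical host directory /tmp/mount-src and the binary and profile names from this log:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	bin, profile := "out/minikube-linux-amd64", "functional-452472" // names taken from this log
	hostDir := "/tmp/mount-src"                                     // assumed host directory for illustration

	// Start the 9p mount in the background, like the test's "daemon" helper does.
	mount := exec.Command(bin, "mount", "-p", profile, hostDir+":/mount-9p", "--alsologtostderr", "-v=1")
	if err := mount.Start(); err != nil {
		log.Fatalf("start mount: %v", err)
	}
	defer mount.Process.Kill() // equivalent to the test stopping the mount process at the end

	// Poll until findmnt sees the 9p mount inside the node; the first attempt often loses the race.
	for i := 0; i < 20; i++ {
		out, err := exec.Command(bin, "-p", profile, "ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mount is up:\n%s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("9p mount never appeared at /mount-9p")
}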

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-452472 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun128220715/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452472 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (187.614029ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 08:40:23.164303  897277 retry.go:31] will retry after 269.099322ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-452472 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun128220715/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452472 ssh "sudo umount -f /mount-9p": exit status 1 (191.201465ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-452472 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-452472 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun128220715/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.25s)
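The specific-port variant above boils down to mounting a host directory into the guest over 9p on a fixed port, checking it with findmnt, and force-unmounting. A minimal shell sketch of the same steps, reusing the commands from this run (the /tmp/mount-src path is a stand-in for the generated temp directory):

# start the 9p mount on a fixed port; this runs in the foreground, so background it or use a second shell
out/minikube-linux-amd64 mount -p functional-452472 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 --port 46464 &

# verify the mount from inside the guest; the test retries once because the mount may not be ready yet
out/minikube-linux-amd64 -p functional-452472 ssh "findmnt -T /mount-9p | grep 9p"

# force-unmount; exit status 32 ("not mounted") is expected if the mount daemon has already been stopped
out/minikube-linux-amd64 -p functional-452472 ssh "sudo umount -f /mount-9p"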

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-452472 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3689486237/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-452472 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3689486237/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-452472 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3689486237/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452472 ssh "findmnt -T" /mount1: exit status 1 (222.925852ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 08:40:24.453539  897277 retry.go:31] will retry after 375.870131ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-452472 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-452472 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3689486237/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-452472 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3689486237/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-452472 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3689486237/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.19s)
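VerifyCleanup exposes one host directory at three guest paths and then relies on the mount command's --kill flag to terminate every mount process for the profile at once. A rough shell equivalent, with /tmp/mount-src again standing in for the generated temp directory:

# expose one host directory at three guest mount points
out/minikube-linux-amd64 mount -p functional-452472 /tmp/mount-src:/mount1 --alsologtostderr -v=1 &
out/minikube-linux-amd64 mount -p functional-452472 /tmp/mount-src:/mount2 --alsologtostderr -v=1 &
out/minikube-linux-amd64 mount -p functional-452472 /tmp/mount-src:/mount3 --alsologtostderr -v=1 &

# confirm each mount from inside the guest (the first check is retried above for the same readiness reason)
for m in /mount1 /mount2 /mount3; do
  out/minikube-linux-amd64 -p functional-452472 ssh "findmnt -T" "$m"
done

# kill all mount processes for the profile in one shot
out/minikube-linux-amd64 mount -p functional-452472 --kill=true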

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.42s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-452472 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-rc.1
registry.k8s.io/kube-proxy:v1.35.0-rc.1
registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
registry.k8s.io/kube-apiserver:v1.35.0-rc.1
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-452472
localhost/kicbase/echo-server:functional-452472
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-452472 image ls --format short --alsologtostderr:
I1217 08:40:42.898292  909466 out.go:360] Setting OutFile to fd 1 ...
I1217 08:40:42.898610  909466 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:40:42.898621  909466 out.go:374] Setting ErrFile to fd 2...
I1217 08:40:42.898626  909466 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:40:42.898885  909466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
I1217 08:40:42.899493  909466 config.go:182] Loaded profile config "functional-452472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 08:40:42.899628  909466 config.go:182] Loaded profile config "functional-452472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 08:40:42.901694  909466 ssh_runner.go:195] Run: systemctl --version
I1217 08:40:42.903989  909466 main.go:143] libmachine: domain functional-452472 has defined MAC address 52:54:00:92:6e:5e in network mk-functional-452472
I1217 08:40:42.904394  909466 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:92:6e:5e", ip: ""} in network mk-functional-452472: {Iface:virbr1 ExpiryTime:2025-12-17 09:35:51 +0000 UTC Type:0 Mac:52:54:00:92:6e:5e Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:functional-452472 Clientid:01:52:54:00:92:6e:5e}
I1217 08:40:42.904424  909466 main.go:143] libmachine: domain functional-452472 has defined IP address 192.168.39.226 and MAC address 52:54:00:92:6e:5e in network mk-functional-452472
I1217 08:40:42.904614  909466 sshutil.go:56] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/functional-452472/id_rsa Username:docker}
I1217 08:40:42.987543  909466 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.19s)
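The ImageList subtests that follow exercise the same image listing, backed by the runtime's crictl output, rendered in different formats. Assuming the same profile, the four variants are simply:

# one line per image reference, as shown above
out/minikube-linux-amd64 -p functional-452472 image ls --format short --alsologtostderr

# the same data as a table, JSON, and YAML, exercised by the next three subtests
out/minikube-linux-amd64 -p functional-452472 image ls --format table --alsologtostderr
out/minikube-linux-amd64 -p functional-452472 image ls --format json --alsologtostderr
out/minikube-linux-amd64 -p functional-452472 image ls --format yaml --alsologtostderr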

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-452472 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ localhost/kicbase/echo-server           │ functional-452472  │ 9056ab77afb8e │ 4.94MB │
│ localhost/minikube-local-cache-test     │ functional-452472  │ e1c7d37f901be │ 3.33kB │
│ localhost/my-image                      │ functional-452472  │ f579d6fc39753 │ 1.47MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-rc.1       │ 5032a56602e1b │ 76.9MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/etcd                    │ 3.6.6-0            │ 0a108f7189562 │ 63.6MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-rc.1       │ af0321f3a4f38 │ 72MB   │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-rc.1       │ 58865405a13bc │ 90.8MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-rc.1       │ 73f80cdc073da │ 52.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-452472 image ls --format table --alsologtostderr:
I1217 08:40:45.576660  909532 out.go:360] Setting OutFile to fd 1 ...
I1217 08:40:45.576753  909532 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:40:45.576758  909532 out.go:374] Setting ErrFile to fd 2...
I1217 08:40:45.576762  909532 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:40:45.576989  909532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
I1217 08:40:45.577498  909532 config.go:182] Loaded profile config "functional-452472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 08:40:45.577600  909532 config.go:182] Loaded profile config "functional-452472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 08:40:45.579649  909532 ssh_runner.go:195] Run: systemctl --version
I1217 08:40:45.581807  909532 main.go:143] libmachine: domain functional-452472 has defined MAC address 52:54:00:92:6e:5e in network mk-functional-452472
I1217 08:40:45.582194  909532 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:92:6e:5e", ip: ""} in network mk-functional-452472: {Iface:virbr1 ExpiryTime:2025-12-17 09:35:51 +0000 UTC Type:0 Mac:52:54:00:92:6e:5e Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:functional-452472 Clientid:01:52:54:00:92:6e:5e}
I1217 08:40:45.582225  909532 main.go:143] libmachine: domain functional-452472 has defined IP address 192.168.39.226 and MAC address 52:54:00:92:6e:5e in network mk-functional-452472
I1217 08:40:45.582371  909532 sshutil.go:56] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/functional-452472/id_rsa Username:docker}
I1217 08:40:45.666383  909532 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-452472 image ls --format json --alsologtostderr:
[{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"f579d6fc397530a2a536986ea7196283aa6b54aed305b49e175cebae8b46fc3d","repoDigests":["localhost/my-image@sha256:2c0515cb395bb08c970535c51ae6c9237dcfbdce4f4f3670a7069815ac30747c"],"r
epoTags":["localhost/my-image:functional-452472"],"size":"1468600"},{"id":"58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce","repoDigests":["registry.k8s.io/kube-apiserver@sha256:4527daf97bed5f1caff2267f9b84a6c626b82615d9ff7f933619321aebde536f","registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-rc.1"],"size":"90844140"},{"id":"5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98","registry.k8s.io/kube-controller-manager@sha256:94b94fef358192d13794f5acd21909a3eb0b3e960ed4286ef37a437e7f9272cd"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"],"size":"76893010"},{"id":"af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0efaa6b2a17dbaaac351bb0f55c1a495d297d87ac86b16965ec5
2e835c2b48d9","registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-rc.1"],"size":"71986585"},{"id":"73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1e2bf4dfee764cc2eb3300c543b3ce1b00ca3ffc46b93f2b7ef326fbc2385636","registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-rc.1"],"size":"52763474"},{"id":"ea4a97be40b3768353a27b6c13e720d0a8a85aea8f28de30fed8d395358fc13e","repoDigests":["docker.io/library/b60a78219e709e3c163dfb0ce1144e8d2513527455189446d60ad80271c458bb-tmp@sha256:0080edb9bfdf289c9c449ad73d18a30106c83df7b34f714e0293929908f8157e"],"repoTags":[],"size":"1466017"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2
c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":["registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a","registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"63582405"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:848
05ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d
10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-452472"],"size":"4943877"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"e1c7d37f901bec57e9640f4323b5bde0929d5e22f0f069c32ba970d569d6b8b0","repoDigests":["localhost/minikube-local-cache-test@sha256:778c23b9bfa9b6c96d0e8f3faeb9
2bb921972a5397fdf81a9295da61f932dcf1"],"repoTags":["localhost/minikube-local-cache-test:functional-452472"],"size":"3330"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-452472 image ls --format json --alsologtostderr:
I1217 08:40:45.378751  909521 out.go:360] Setting OutFile to fd 1 ...
I1217 08:40:45.379028  909521 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:40:45.379038  909521 out.go:374] Setting ErrFile to fd 2...
I1217 08:40:45.379043  909521 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:40:45.379233  909521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
I1217 08:40:45.379794  909521 config.go:182] Loaded profile config "functional-452472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 08:40:45.379881  909521 config.go:182] Loaded profile config "functional-452472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 08:40:45.382021  909521 ssh_runner.go:195] Run: systemctl --version
I1217 08:40:45.384365  909521 main.go:143] libmachine: domain functional-452472 has defined MAC address 52:54:00:92:6e:5e in network mk-functional-452472
I1217 08:40:45.384806  909521 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:92:6e:5e", ip: ""} in network mk-functional-452472: {Iface:virbr1 ExpiryTime:2025-12-17 09:35:51 +0000 UTC Type:0 Mac:52:54:00:92:6e:5e Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:functional-452472 Clientid:01:52:54:00:92:6e:5e}
I1217 08:40:45.384825  909521 main.go:143] libmachine: domain functional-452472 has defined IP address 192.168.39.226 and MAC address 52:54:00:92:6e:5e in network mk-functional-452472
I1217 08:40:45.384978  909521 sshutil.go:56] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/functional-452472/id_rsa Username:docker}
I1217 08:40:45.469255  909521 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-452472 image ls --format yaml --alsologtostderr:
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: 58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:4527daf97bed5f1caff2267f9b84a6c626b82615d9ff7f933619321aebde536f
- registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-rc.1
size: "90844140"
- id: 73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1e2bf4dfee764cc2eb3300c543b3ce1b00ca3ffc46b93f2b7ef326fbc2385636
- registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-rc.1
size: "52763474"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0efaa6b2a17dbaaac351bb0f55c1a495d297d87ac86b16965ec52e835c2b48d9
- registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-rc.1
size: "71986585"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-452472
size: "4943877"
- id: e1c7d37f901bec57e9640f4323b5bde0929d5e22f0f069c32ba970d569d6b8b0
repoDigests:
- localhost/minikube-local-cache-test@sha256:778c23b9bfa9b6c96d0e8f3faeb92bb921972a5397fdf81a9295da61f932dcf1
repoTags:
- localhost/minikube-local-cache-test:functional-452472
size: "3330"
- id: 5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98
- registry.k8s.io/kube-controller-manager@sha256:94b94fef358192d13794f5acd21909a3eb0b3e960ed4286ef37a437e7f9272cd
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
size: "76893010"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests:
- registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "63582405"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-452472 image ls --format yaml --alsologtostderr:
I1217 08:40:43.094489  909477 out.go:360] Setting OutFile to fd 1 ...
I1217 08:40:43.094788  909477 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:40:43.094805  909477 out.go:374] Setting ErrFile to fd 2...
I1217 08:40:43.094813  909477 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:40:43.095120  909477 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
I1217 08:40:43.095994  909477 config.go:182] Loaded profile config "functional-452472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 08:40:43.096142  909477 config.go:182] Loaded profile config "functional-452472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 08:40:43.098654  909477 ssh_runner.go:195] Run: systemctl --version
I1217 08:40:43.100872  909477 main.go:143] libmachine: domain functional-452472 has defined MAC address 52:54:00:92:6e:5e in network mk-functional-452472
I1217 08:40:43.101234  909477 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:92:6e:5e", ip: ""} in network mk-functional-452472: {Iface:virbr1 ExpiryTime:2025-12-17 09:35:51 +0000 UTC Type:0 Mac:52:54:00:92:6e:5e Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:functional-452472 Clientid:01:52:54:00:92:6e:5e}
I1217 08:40:43.101259  909477 main.go:143] libmachine: domain functional-452472 has defined IP address 192.168.39.226 and MAC address 52:54:00:92:6e:5e in network mk-functional-452472
I1217 08:40:43.101391  909477 sshutil.go:56] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/functional-452472/id_rsa Username:docker}
I1217 08:40:43.183841  909477 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (2.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-452472 ssh pgrep buildkitd: exit status 1 (168.381906ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 image build -t localhost/my-image:functional-452472 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-452472 image build -t localhost/my-image:functional-452472 testdata/build --alsologtostderr: (1.724307174s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-452472 image build -t localhost/my-image:functional-452472 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> ea4a97be40b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-452472
--> f579d6fc397
Successfully tagged localhost/my-image:functional-452472
f579d6fc397530a2a536986ea7196283aa6b54aed305b49e175cebae8b46fc3d
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-452472 image build -t localhost/my-image:functional-452472 testdata/build --alsologtostderr:
I1217 08:40:43.454751  909499 out.go:360] Setting OutFile to fd 1 ...
I1217 08:40:43.454986  909499 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:40:43.454995  909499 out.go:374] Setting ErrFile to fd 2...
I1217 08:40:43.454998  909499 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:40:43.455178  909499 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
I1217 08:40:43.455731  909499 config.go:182] Loaded profile config "functional-452472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 08:40:43.456329  909499 config.go:182] Loaded profile config "functional-452472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 08:40:43.458210  909499 ssh_runner.go:195] Run: systemctl --version
I1217 08:40:43.460166  909499 main.go:143] libmachine: domain functional-452472 has defined MAC address 52:54:00:92:6e:5e in network mk-functional-452472
I1217 08:40:43.460479  909499 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:92:6e:5e", ip: ""} in network mk-functional-452472: {Iface:virbr1 ExpiryTime:2025-12-17 09:35:51 +0000 UTC Type:0 Mac:52:54:00:92:6e:5e Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:functional-452472 Clientid:01:52:54:00:92:6e:5e}
I1217 08:40:43.460503  909499 main.go:143] libmachine: domain functional-452472 has defined IP address 192.168.39.226 and MAC address 52:54:00:92:6e:5e in network mk-functional-452472
I1217 08:40:43.460631  909499 sshutil.go:56] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/functional-452472/id_rsa Username:docker}
I1217 08:40:43.543750  909499 build_images.go:162] Building image from path: /tmp/build.3345539245.tar
I1217 08:40:43.543860  909499 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 08:40:43.560167  909499 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3345539245.tar
I1217 08:40:43.565132  909499 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3345539245.tar: stat -c "%s %y" /var/lib/minikube/build/build.3345539245.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3345539245.tar': No such file or directory
I1217 08:40:43.565169  909499 ssh_runner.go:362] scp /tmp/build.3345539245.tar --> /var/lib/minikube/build/build.3345539245.tar (3072 bytes)
I1217 08:40:43.602901  909499 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3345539245
I1217 08:40:43.616696  909499 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3345539245 -xf /var/lib/minikube/build/build.3345539245.tar
I1217 08:40:43.628193  909499 crio.go:315] Building image: /var/lib/minikube/build/build.3345539245
I1217 08:40:43.628263  909499 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-452472 /var/lib/minikube/build/build.3345539245 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1217 08:40:45.082993  909499 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-452472 /var/lib/minikube/build/build.3345539245 --cgroup-manager=cgroupfs: (1.454703507s)
I1217 08:40:45.083085  909499 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3345539245
I1217 08:40:45.096701  909499 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3345539245.tar
I1217 08:40:45.110192  909499 build_images.go:218] Built localhost/my-image:functional-452472 from /tmp/build.3345539245.tar
I1217 08:40:45.110241  909499 build_images.go:134] succeeded building to: functional-452472
I1217 08:40:45.110256  909499 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (2.09s)
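Because buildkitd is not running in the guest (the pgrep check above exits 1), the image build is delegated to sudo podman build inside the VM. The exact contents of testdata/build are not included in this report; judging from the three STEP lines, the context is roughly equivalent to the sketch below, in which the heredoc Dockerfile and the /tmp/build-ctx path are illustrative reconstructions rather than the repository's actual files:

# reconstruct a build context similar to testdata/build (illustrative only)
mkdir -p /tmp/build-ctx && cd /tmp/build-ctx
printf 'hello\n' > content.txt
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF

# build inside the minikube VM and tag the result
out/minikube-linux-amd64 -p functional-452472 image build -t localhost/my-image:functional-452472 /tmp/build-ctx --alsologtostderr

# confirm the image shows up in the runtime's store
out/minikube-linux-amd64 -p functional-452472 image ls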

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-452472
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.43s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 image load --daemon kicbase/echo-server:functional-452472 --alsologtostderr
E1217 08:40:28.574762  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:40:28.581272  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:40:28.592701  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:40:28.614127  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:40:28.655582  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-452472 image load --daemon kicbase/echo-server:functional-452472 --alsologtostderr: (1.072144098s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 image ls
E1217 08:40:28.737105  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 image load --daemon kicbase/echo-server:functional-452472 --alsologtostderr
E1217 08:40:28.898909  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:40:29.221009  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
E1217 08:40:29.862959  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-452472
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 image load --daemon kicbase/echo-server:functional-452472 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.57s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 image save kicbase/echo-server:functional-452472 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.57s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 image rm kicbase/echo-server:functional-452472 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.75s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.75s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-452472
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 image save --daemon kicbase/echo-server:functional-452472 --alsologtostderr
I1217 08:40:32.833127  897277 detect.go:223] nested VM detected
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-452472
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.57s)
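Taken together, the ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon subtests walk an echo-server image through a full export/import cycle between the host Docker daemon and the cluster's container runtime. A condensed sketch of that cycle, using the tag and tarball path from this run:

# push a locally tagged image into the cluster's container runtime
docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-452472
out/minikube-linux-amd64 -p functional-452472 image load --daemon kicbase/echo-server:functional-452472 --alsologtostderr

# export to a tarball, remove it from the runtime, then re-import from the file
out/minikube-linux-amd64 -p functional-452472 image save kicbase/echo-server:functional-452472 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-452472 image rm kicbase/echo-server:functional-452472 --alsologtostderr
out/minikube-linux-amd64 -p functional-452472 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr

# save back into the host Docker daemon and verify; as the inspect in this run shows, the
# round-tripped image lands under the localhost/ prefix
docker rmi kicbase/echo-server:functional-452472
out/minikube-linux-amd64 -p functional-452472 image save --daemon kicbase/echo-server:functional-452472 --alsologtostderr
docker image inspect localhost/kicbase/echo-server:functional-452472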

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 update-context --alsologtostderr -v=2
E1217 08:40:49.070169  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:41:09.552037  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:41:50.514363  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (1.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-452472 service list: (1.200827366s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (1.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (1.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-452472 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-452472 service list -o json: (1.233385522s)
functional_test.go:1504: Took "1.233477725s" to run "out/minikube-linux-amd64 -p functional-452472 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (1.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-452472
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-452472
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-452472
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (187.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1217 08:50:28.574520  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-371170 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m6.957434903s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (187.50s)
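The remainder of the multi-control-plane suite runs against a cluster created by this single start invocation; for reference, the cluster in this run was brought up and checked with:

# create a highly available cluster (multiple control-plane nodes) on the kvm2/crio stack and wait for readiness
out/minikube-linux-amd64 -p ha-371170 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2 --container-runtime=crio

# report the state of every node in the profile
out/minikube-linux-amd64 -p ha-371170 status --alsologtostderr -v 5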

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-371170 kubectl -- rollout status deployment/busybox: (3.015505652s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 kubectl -- exec busybox-7b57f96db7-58rpk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 kubectl -- exec busybox-7b57f96db7-r4vk6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 kubectl -- exec busybox-7b57f96db7-xjkt2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 kubectl -- exec busybox-7b57f96db7-58rpk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 kubectl -- exec busybox-7b57f96db7-r4vk6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 kubectl -- exec busybox-7b57f96db7-xjkt2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 kubectl -- exec busybox-7b57f96db7-58rpk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 kubectl -- exec busybox-7b57f96db7-r4vk6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 kubectl -- exec busybox-7b57f96db7-xjkt2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.41s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.35s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 kubectl -- exec busybox-7b57f96db7-58rpk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 kubectl -- exec busybox-7b57f96db7-58rpk -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 kubectl -- exec busybox-7b57f96db7-r4vk6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 kubectl -- exec busybox-7b57f96db7-r4vk6 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 kubectl -- exec busybox-7b57f96db7-xjkt2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 kubectl -- exec busybox-7b57f96db7-xjkt2 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.35s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (42.93s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 node add --alsologtostderr -v 5
E1217 08:52:45.365032  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-371170 node add --alsologtostderr -v 5: (42.266138352s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (42.93s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-371170 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.68s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.68s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (10.75s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 cp testdata/cp-test.txt ha-371170:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 cp ha-371170:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2395871347/001/cp-test_ha-371170.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 cp ha-371170:/home/docker/cp-test.txt ha-371170-m02:/home/docker/cp-test_ha-371170_ha-371170-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170-m02 "sudo cat /home/docker/cp-test_ha-371170_ha-371170-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 cp ha-371170:/home/docker/cp-test.txt ha-371170-m03:/home/docker/cp-test_ha-371170_ha-371170-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170-m03 "sudo cat /home/docker/cp-test_ha-371170_ha-371170-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 cp ha-371170:/home/docker/cp-test.txt ha-371170-m04:/home/docker/cp-test_ha-371170_ha-371170-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170-m04 "sudo cat /home/docker/cp-test_ha-371170_ha-371170-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 cp testdata/cp-test.txt ha-371170-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 cp ha-371170-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2395871347/001/cp-test_ha-371170-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 cp ha-371170-m02:/home/docker/cp-test.txt ha-371170:/home/docker/cp-test_ha-371170-m02_ha-371170.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170 "sudo cat /home/docker/cp-test_ha-371170-m02_ha-371170.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 cp ha-371170-m02:/home/docker/cp-test.txt ha-371170-m03:/home/docker/cp-test_ha-371170-m02_ha-371170-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170-m03 "sudo cat /home/docker/cp-test_ha-371170-m02_ha-371170-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 cp ha-371170-m02:/home/docker/cp-test.txt ha-371170-m04:/home/docker/cp-test_ha-371170-m02_ha-371170-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170-m04 "sudo cat /home/docker/cp-test_ha-371170-m02_ha-371170-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 cp testdata/cp-test.txt ha-371170-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 cp ha-371170-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2395871347/001/cp-test_ha-371170-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 cp ha-371170-m03:/home/docker/cp-test.txt ha-371170:/home/docker/cp-test_ha-371170-m03_ha-371170.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170 "sudo cat /home/docker/cp-test_ha-371170-m03_ha-371170.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 cp ha-371170-m03:/home/docker/cp-test.txt ha-371170-m02:/home/docker/cp-test_ha-371170-m03_ha-371170-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170-m02 "sudo cat /home/docker/cp-test_ha-371170-m03_ha-371170-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 cp ha-371170-m03:/home/docker/cp-test.txt ha-371170-m04:/home/docker/cp-test_ha-371170-m03_ha-371170-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170-m04 "sudo cat /home/docker/cp-test_ha-371170-m03_ha-371170-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 cp testdata/cp-test.txt ha-371170-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 cp ha-371170-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2395871347/001/cp-test_ha-371170-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 cp ha-371170-m04:/home/docker/cp-test.txt ha-371170:/home/docker/cp-test_ha-371170-m04_ha-371170.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170 "sudo cat /home/docker/cp-test_ha-371170-m04_ha-371170.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 cp ha-371170-m04:/home/docker/cp-test.txt ha-371170-m02:/home/docker/cp-test_ha-371170-m04_ha-371170-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170-m02 "sudo cat /home/docker/cp-test_ha-371170-m04_ha-371170-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 cp ha-371170-m04:/home/docker/cp-test.txt ha-371170-m03:/home/docker/cp-test_ha-371170-m04_ha-371170-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 ssh -n ha-371170-m03 "sudo cat /home/docker/cp-test_ha-371170-m04_ha-371170-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.75s)
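Note on the CopyFile block above: it exercises the same copy-then-read-back round trip for every node pair with `minikube cp` and `minikube ssh -n <node>`. The following is a minimal Go sketch of that pattern, not part of the test suite; the profile and node names are taken from the log above, and `minikube` is assumed to be on PATH.

// copyverify.go - illustrative only: copy a file to a node and read it back.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func run(args ...string) (string, error) {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	return string(out), err
}

func main() {
	const profile = "ha-371170"    // profile name taken from the test log
	const node = "ha-371170-m02"   // target node, also from the log
	local := "testdata/cp-test.txt"
	remote := "/home/docker/cp-test.txt"

	want, err := os.ReadFile(local)
	if err != nil {
		log.Fatal(err)
	}

	// minikube -p <profile> cp <local> <node>:<remote>
	if out, err := run("-p", profile, "cp", local, node+":"+remote); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}

	// minikube -p <profile> ssh -n <node> "sudo cat <remote>"
	got, err := run("-p", profile, "ssh", "-n", node, "sudo cat "+remote)
	if err != nil {
		log.Fatalf("ssh failed: %v\n%s", err, got)
	}

	if strings.TrimSpace(got) == strings.TrimSpace(string(want)) {
		fmt.Println("copy verified on", node)
	} else {
		fmt.Println("content mismatch on", node)
	}
}

The test itself repeats this for every source/destination node combination; the sketch covers a single pair to keep the pattern visible.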

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (3.49s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-371170 node stop m02 --alsologtostderr -v 5: (3.004186166s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-371170 status --alsologtostderr -v 5: exit status 7 (485.24171ms)

                                                
                                                
-- stdout --
	ha-371170
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-371170-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-371170-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-371170-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 08:53:34.984281  914191 out.go:360] Setting OutFile to fd 1 ...
	I1217 08:53:34.984549  914191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:53:34.984557  914191 out.go:374] Setting ErrFile to fd 2...
	I1217 08:53:34.984561  914191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 08:53:34.984741  914191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
	I1217 08:53:34.984913  914191 out.go:368] Setting JSON to false
	I1217 08:53:34.984943  914191 mustload.go:66] Loading cluster: ha-371170
	I1217 08:53:34.985042  914191 notify.go:221] Checking for updates...
	I1217 08:53:34.985294  914191 config.go:182] Loaded profile config "ha-371170": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 08:53:34.985310  914191 status.go:174] checking status of ha-371170 ...
	I1217 08:53:34.987496  914191 status.go:371] ha-371170 host status = "Running" (err=<nil>)
	I1217 08:53:34.987528  914191 host.go:66] Checking if "ha-371170" exists ...
	I1217 08:53:34.990497  914191 main.go:143] libmachine: domain ha-371170 has defined MAC address 52:54:00:90:cd:42 in network mk-ha-371170
	I1217 08:53:34.990981  914191 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:90:cd:42", ip: ""} in network mk-ha-371170: {Iface:virbr1 ExpiryTime:2025-12-17 09:49:38 +0000 UTC Type:0 Mac:52:54:00:90:cd:42 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-371170 Clientid:01:52:54:00:90:cd:42}
	I1217 08:53:34.991010  914191 main.go:143] libmachine: domain ha-371170 has defined IP address 192.168.39.17 and MAC address 52:54:00:90:cd:42 in network mk-ha-371170
	I1217 08:53:34.991132  914191 host.go:66] Checking if "ha-371170" exists ...
	I1217 08:53:34.991326  914191 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 08:53:34.993443  914191 main.go:143] libmachine: domain ha-371170 has defined MAC address 52:54:00:90:cd:42 in network mk-ha-371170
	I1217 08:53:34.993850  914191 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:90:cd:42", ip: ""} in network mk-ha-371170: {Iface:virbr1 ExpiryTime:2025-12-17 09:49:38 +0000 UTC Type:0 Mac:52:54:00:90:cd:42 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-371170 Clientid:01:52:54:00:90:cd:42}
	I1217 08:53:34.993891  914191 main.go:143] libmachine: domain ha-371170 has defined IP address 192.168.39.17 and MAC address 52:54:00:90:cd:42 in network mk-ha-371170
	I1217 08:53:34.994042  914191 sshutil.go:56] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/ha-371170/id_rsa Username:docker}
	I1217 08:53:35.077957  914191 ssh_runner.go:195] Run: systemctl --version
	I1217 08:53:35.084634  914191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:53:35.102024  914191 kubeconfig.go:125] found "ha-371170" server: "https://192.168.39.254:8443"
	I1217 08:53:35.102070  914191 api_server.go:166] Checking apiserver status ...
	I1217 08:53:35.102113  914191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 08:53:35.122174  914191 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1454/cgroup
	W1217 08:53:35.133697  914191 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1454/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 08:53:35.133744  914191 ssh_runner.go:195] Run: ls
	I1217 08:53:35.140361  914191 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1217 08:53:35.147140  914191 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1217 08:53:35.147162  914191 status.go:463] ha-371170 apiserver status = Running (err=<nil>)
	I1217 08:53:35.147172  914191 status.go:176] ha-371170 status: &{Name:ha-371170 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 08:53:35.147190  914191 status.go:174] checking status of ha-371170-m02 ...
	I1217 08:53:35.148983  914191 status.go:371] ha-371170-m02 host status = "Stopped" (err=<nil>)
	I1217 08:53:35.148998  914191 status.go:384] host is not running, skipping remaining checks
	I1217 08:53:35.149005  914191 status.go:176] ha-371170-m02 status: &{Name:ha-371170-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 08:53:35.149026  914191 status.go:174] checking status of ha-371170-m03 ...
	I1217 08:53:35.150137  914191 status.go:371] ha-371170-m03 host status = "Running" (err=<nil>)
	I1217 08:53:35.150152  914191 host.go:66] Checking if "ha-371170-m03" exists ...
	I1217 08:53:35.152382  914191 main.go:143] libmachine: domain ha-371170-m03 has defined MAC address 52:54:00:2d:b9:02 in network mk-ha-371170
	I1217 08:53:35.152797  914191 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2d:b9:02", ip: ""} in network mk-ha-371170: {Iface:virbr1 ExpiryTime:2025-12-17 09:51:33 +0000 UTC Type:0 Mac:52:54:00:2d:b9:02 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-371170-m03 Clientid:01:52:54:00:2d:b9:02}
	I1217 08:53:35.152828  914191 main.go:143] libmachine: domain ha-371170-m03 has defined IP address 192.168.39.74 and MAC address 52:54:00:2d:b9:02 in network mk-ha-371170
	I1217 08:53:35.153002  914191 host.go:66] Checking if "ha-371170-m03" exists ...
	I1217 08:53:35.153222  914191 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 08:53:35.155737  914191 main.go:143] libmachine: domain ha-371170-m03 has defined MAC address 52:54:00:2d:b9:02 in network mk-ha-371170
	I1217 08:53:35.156165  914191 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2d:b9:02", ip: ""} in network mk-ha-371170: {Iface:virbr1 ExpiryTime:2025-12-17 09:51:33 +0000 UTC Type:0 Mac:52:54:00:2d:b9:02 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-371170-m03 Clientid:01:52:54:00:2d:b9:02}
	I1217 08:53:35.156189  914191 main.go:143] libmachine: domain ha-371170-m03 has defined IP address 192.168.39.74 and MAC address 52:54:00:2d:b9:02 in network mk-ha-371170
	I1217 08:53:35.156304  914191 sshutil.go:56] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/ha-371170-m03/id_rsa Username:docker}
	I1217 08:53:35.234708  914191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:53:35.257731  914191 kubeconfig.go:125] found "ha-371170" server: "https://192.168.39.254:8443"
	I1217 08:53:35.257768  914191 api_server.go:166] Checking apiserver status ...
	I1217 08:53:35.257840  914191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 08:53:35.277653  914191 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1798/cgroup
	W1217 08:53:35.289634  914191 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1798/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 08:53:35.289709  914191 ssh_runner.go:195] Run: ls
	I1217 08:53:35.294994  914191 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1217 08:53:35.299673  914191 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1217 08:53:35.299695  914191 status.go:463] ha-371170-m03 apiserver status = Running (err=<nil>)
	I1217 08:53:35.299703  914191 status.go:176] ha-371170-m03 status: &{Name:ha-371170-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 08:53:35.299717  914191 status.go:174] checking status of ha-371170-m04 ...
	I1217 08:53:35.301034  914191 status.go:371] ha-371170-m04 host status = "Running" (err=<nil>)
	I1217 08:53:35.301051  914191 host.go:66] Checking if "ha-371170-m04" exists ...
	I1217 08:53:35.303370  914191 main.go:143] libmachine: domain ha-371170-m04 has defined MAC address 52:54:00:20:19:ff in network mk-ha-371170
	I1217 08:53:35.303884  914191 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:20:19:ff", ip: ""} in network mk-ha-371170: {Iface:virbr1 ExpiryTime:2025-12-17 09:52:53 +0000 UTC Type:0 Mac:52:54:00:20:19:ff Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-371170-m04 Clientid:01:52:54:00:20:19:ff}
	I1217 08:53:35.303913  914191 main.go:143] libmachine: domain ha-371170-m04 has defined IP address 192.168.39.98 and MAC address 52:54:00:20:19:ff in network mk-ha-371170
	I1217 08:53:35.304067  914191 host.go:66] Checking if "ha-371170-m04" exists ...
	I1217 08:53:35.304298  914191 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 08:53:35.306485  914191 main.go:143] libmachine: domain ha-371170-m04 has defined MAC address 52:54:00:20:19:ff in network mk-ha-371170
	I1217 08:53:35.306913  914191 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:20:19:ff", ip: ""} in network mk-ha-371170: {Iface:virbr1 ExpiryTime:2025-12-17 09:52:53 +0000 UTC Type:0 Mac:52:54:00:20:19:ff Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-371170-m04 Clientid:01:52:54:00:20:19:ff}
	I1217 08:53:35.306937  914191 main.go:143] libmachine: domain ha-371170-m04 has defined IP address 192.168.39.98 and MAC address 52:54:00:20:19:ff in network mk-ha-371170
	I1217 08:53:35.307086  914191 sshutil.go:56] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/ha-371170-m04/id_rsa Username:docker}
	I1217 08:53:35.389690  914191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 08:53:35.407383  914191 status.go:176] ha-371170-m04 status: &{Name:ha-371170-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (3.49s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.51s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.51s)
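The HAppy*/Degraded* checks in this suite all run `minikube profile list --output json` and inspect the reported profile health. A small sketch of consuming that output follows; since the exact JSON schema is not shown in this log, the code deliberately decodes only the top-level keys and treats everything below them as opaque.

// profilelist.go - illustrative only: run `minikube profile list --output json`
// and report the top-level keys of the JSON document it prints.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
	if err != nil {
		log.Fatalf("profile list failed: %v", err)
	}

	// Decode generically; the concrete per-profile fields are not assumed here.
	var doc map[string]json.RawMessage
	if err := json.Unmarshal(out, &doc); err != nil {
		log.Fatalf("unexpected output: %v", err)
	}
	for k, v := range doc {
		fmt.Printf("%s: %d bytes\n", k, len(v))
	}
}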

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (24.99s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-371170 node start m02 --alsologtostderr -v 5: (24.186991983s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (24.99s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.7s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.70s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (284.7s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 stop --alsologtostderr -v 5
E1217 08:54:16.411748  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:54:16.418141  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:54:16.429499  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:54:16.450921  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:54:16.492313  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:54:16.573832  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:54:16.735417  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:54:17.057166  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:54:17.699221  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:54:18.981588  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:54:21.543718  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:54:26.666091  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:54:36.908183  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:54:57.389756  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:55:28.575129  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:55:38.352001  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:55:48.435998  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:56:51.641408  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-371170 stop --alsologtostderr -v 5: (2m53.096605813s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 start --wait true --alsologtostderr -v 5
E1217 08:57:00.274085  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:57:45.365796  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-371170 start --wait true --alsologtostderr -v 5: (1m51.455501839s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (284.70s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.13s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-371170 node delete m03 --alsologtostderr -v 5: (17.506611371s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.13s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (243.85s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 stop --alsologtostderr -v 5
E1217 08:59:16.411077  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:59:44.116229  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:00:28.574350  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:02:45.371321  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-371170 stop --alsologtostderr -v 5: (4m3.786556954s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-371170 status --alsologtostderr -v 5: exit status 7 (66.229691ms)

                                                
                                                
-- stdout --
	ha-371170
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-371170-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-371170-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 09:03:08.811304  917048 out.go:360] Setting OutFile to fd 1 ...
	I1217 09:03:08.811426  917048 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 09:03:08.811437  917048 out.go:374] Setting ErrFile to fd 2...
	I1217 09:03:08.811443  917048 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 09:03:08.811683  917048 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
	I1217 09:03:08.811874  917048 out.go:368] Setting JSON to false
	I1217 09:03:08.811912  917048 mustload.go:66] Loading cluster: ha-371170
	I1217 09:03:08.812026  917048 notify.go:221] Checking for updates...
	I1217 09:03:08.812310  917048 config.go:182] Loaded profile config "ha-371170": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 09:03:08.812330  917048 status.go:174] checking status of ha-371170 ...
	I1217 09:03:08.814329  917048 status.go:371] ha-371170 host status = "Stopped" (err=<nil>)
	I1217 09:03:08.814344  917048 status.go:384] host is not running, skipping remaining checks
	I1217 09:03:08.814350  917048 status.go:176] ha-371170 status: &{Name:ha-371170 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 09:03:08.814369  917048 status.go:174] checking status of ha-371170-m02 ...
	I1217 09:03:08.815720  917048 status.go:371] ha-371170-m02 host status = "Stopped" (err=<nil>)
	I1217 09:03:08.815734  917048 status.go:384] host is not running, skipping remaining checks
	I1217 09:03:08.815740  917048 status.go:176] ha-371170-m02 status: &{Name:ha-371170-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 09:03:08.815753  917048 status.go:174] checking status of ha-371170-m04 ...
	I1217 09:03:08.816740  917048 status.go:371] ha-371170-m04 host status = "Stopped" (err=<nil>)
	I1217 09:03:08.816753  917048 status.go:384] host is not running, skipping remaining checks
	I1217 09:03:08.816758  917048 status.go:176] ha-371170-m04 status: &{Name:ha-371170-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (243.85s)
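As in the StopSecondaryNode run earlier, `minikube status` returns a non-zero exit code (exit status 7 in both cases above) once any node in the profile is stopped, so the exit code alone works as a coarse health probe. A hedged Go sketch of that check follows; the meaning attached to code 7 is inferred from this log rather than from documentation.

// statusprobe.go - illustrative only: use the exit code of `minikube status`
// as a coarse signal of whether every node in the profile is running.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const profile = "ha-371170" // profile name taken from the log above

	cmd := exec.Command("minikube", "-p", profile, "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	if err == nil {
		fmt.Println("all nodes report Running")
		return
	}
	if ee, ok := err.(*exec.ExitError); ok {
		// In the log above, exit status 7 accompanied stopped nodes.
		fmt.Printf("status exited with code %d: at least one node is not running\n", ee.ExitCode())
		return
	}
	fmt.Println("could not run minikube:", err)
}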

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (75.62s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1217 09:04:16.410829  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-371170 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m14.902726302s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (75.62s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (75.61s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 node add --control-plane --alsologtostderr -v 5
E1217 09:05:28.574856  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-371170 node add --control-plane --alsologtostderr -v 5: (1m14.938948971s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-371170 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.61s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)

                                                
                                    
TestJSONOutput/start/Command (58.47s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-047754 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-047754 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (58.47133466s)
--- PASS: TestJSONOutput/start/Command (58.47s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.71s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-047754 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.63s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-047754 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-047754 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-047754 --output=json --user=testUser: (6.995898016s)
--- PASS: TestJSONOutput/stop/Command (7.00s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-698759 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-698759 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (79.187546ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"54cc8d34-f809-480c-9ae4-705306adf7f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-698759] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b270ae11-73a6-4080-803b-7b6c7bb890f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22182"}}
	{"specversion":"1.0","id":"ea039987-45d8-49a7-925f-ffe32b5b8206","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ce890f81-0dc9-428a-b506-ce54b9eb1c33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22182-893359/kubeconfig"}}
	{"specversion":"1.0","id":"c58d8218-6b0d-4c56-be00-469ee37f8dae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-893359/.minikube"}}
	{"specversion":"1.0","id":"e8d97d91-2815-4f37-956a-13e6adf8912f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"11b32898-9d4c-4013-b47b-74b74d07a136","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fbc16010-cc48-4f47-8a97-8f2b1f2b5848","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-698759" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-698759
--- PASS: TestErrorJSONOutput (0.24s)
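The `--output=json` runs above emit one CloudEvents-style JSON object per line, and the stdout dump shows the fields involved (specversion, id, source, type, datacontenttype, data). The sketch below decodes such a stream from stdin, so it could be fed with something like `minikube start -p <profile> --output=json | go run events.go`; only fields visible in the log are assumed, and the file name is illustrative.

// events.go - illustrative only: decode the line-delimited JSON events that
// `minikube ... --output=json` prints, using only fields visible in the log.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // some event lines are long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		fmt.Printf("%-40s %s\n", ev.Type, ev.Data["message"])
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read error:", err)
	}
}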

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (77.34s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-520563 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-520563 --driver=kvm2  --container-runtime=crio: (37.936875147s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-524165 --driver=kvm2  --container-runtime=crio
E1217 09:07:45.369884  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-524165 --driver=kvm2  --container-runtime=crio: (36.79909716s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-520563
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-524165
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-524165" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-524165
helpers_test.go:176: Cleaning up "first-520563" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-520563
--- PASS: TestMinikubeProfile (77.34s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (19.4s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-836289 --memory=3072 --mount-string /tmp/TestMountStartserial3117411678/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-836289 --memory=3072 --mount-string /tmp/TestMountStartserial3117411678/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (18.40055187s)
--- PASS: TestMountStart/serial/StartWithMountFirst (19.40s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.31s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-836289 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-836289 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (21.85s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-856756 --memory=3072 --mount-string /tmp/TestMountStartserial3117411678/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-856756 --memory=3072 --mount-string /tmp/TestMountStartserial3117411678/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (20.850408238s)
--- PASS: TestMountStart/serial/StartWithMountSecond (21.85s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-856756 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-856756 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-836289 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-856756 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-856756 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                    
TestMountStart/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-856756
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-856756: (1.336218622s)
--- PASS: TestMountStart/serial/Stop (1.34s)

                                                
                                    
TestMountStart/serial/RestartStopped (17.99s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-856756
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-856756: (16.992762673s)
--- PASS: TestMountStart/serial/RestartStopped (17.99s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.32s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-856756 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-856756 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.32s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (98.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-046714 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1217 09:09:16.411608  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:10:28.575004  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:10:39.478183  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-046714 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m37.844810876s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (98.19s)
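The two-node bring-up reduces to a single start with --nodes=2 followed by a status check; a sketch based on the logged command, with an illustrative profile name:
	$ minikube start -p multinode --nodes=2 --memory=3072 --wait=true --driver=kvm2 --container-runtime=crio
	$ minikube -p multinode status --alsologtostderr   # the control plane and the m02 worker should both report Running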

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-046714 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-046714 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-046714 -- rollout status deployment/busybox: (2.570535212s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-046714 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-046714 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-046714 -- exec busybox-7b57f96db7-9m4w9 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-046714 -- exec busybox-7b57f96db7-bddjm -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-046714 -- exec busybox-7b57f96db7-9m4w9 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-046714 -- exec busybox-7b57f96db7-bddjm -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-046714 -- exec busybox-7b57f96db7-9m4w9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-046714 -- exec busybox-7b57f96db7-bddjm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.22s)
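The DNS checks boil down to the following, using minikube's bundled kubectl; the profile name and the <busybox-pod> placeholder are illustrative:
	$ minikube kubectl -p multinode -- apply -f testdata/multinodes/multinode-pod-dns-test.yaml
	$ minikube kubectl -p multinode -- rollout status deployment/busybox
	$ minikube kubectl -p multinode -- get pods -o jsonpath='{.items[*].metadata.name}'
	$ minikube kubectl -p multinode -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local   # run against each busybox pod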

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-046714 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-046714 -- exec busybox-7b57f96db7-9m4w9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-046714 -- exec busybox-7b57f96db7-9m4w9 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-046714 -- exec busybox-7b57f96db7-bddjm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-046714 -- exec busybox-7b57f96db7-bddjm -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.89s)

                                                
                                    
TestMultiNode/serial/AddNode (40.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-046714 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-046714 -v=5 --alsologtostderr: (39.794726121s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (40.23s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-046714 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.47s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 cp testdata/cp-test.txt multinode-046714:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 ssh -n multinode-046714 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 cp multinode-046714:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3873326204/001/cp-test_multinode-046714.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 ssh -n multinode-046714 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 cp multinode-046714:/home/docker/cp-test.txt multinode-046714-m02:/home/docker/cp-test_multinode-046714_multinode-046714-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 ssh -n multinode-046714 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 ssh -n multinode-046714-m02 "sudo cat /home/docker/cp-test_multinode-046714_multinode-046714-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 cp multinode-046714:/home/docker/cp-test.txt multinode-046714-m03:/home/docker/cp-test_multinode-046714_multinode-046714-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 ssh -n multinode-046714 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 ssh -n multinode-046714-m03 "sudo cat /home/docker/cp-test_multinode-046714_multinode-046714-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 cp testdata/cp-test.txt multinode-046714-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 ssh -n multinode-046714-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 cp multinode-046714-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3873326204/001/cp-test_multinode-046714-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 ssh -n multinode-046714-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 cp multinode-046714-m02:/home/docker/cp-test.txt multinode-046714:/home/docker/cp-test_multinode-046714-m02_multinode-046714.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 ssh -n multinode-046714-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 ssh -n multinode-046714 "sudo cat /home/docker/cp-test_multinode-046714-m02_multinode-046714.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 cp multinode-046714-m02:/home/docker/cp-test.txt multinode-046714-m03:/home/docker/cp-test_multinode-046714-m02_multinode-046714-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 ssh -n multinode-046714-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 ssh -n multinode-046714-m03 "sudo cat /home/docker/cp-test_multinode-046714-m02_multinode-046714-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 cp testdata/cp-test.txt multinode-046714-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 ssh -n multinode-046714-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 cp multinode-046714-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3873326204/001/cp-test_multinode-046714-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 ssh -n multinode-046714-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 cp multinode-046714-m03:/home/docker/cp-test.txt multinode-046714:/home/docker/cp-test_multinode-046714-m03_multinode-046714.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 ssh -n multinode-046714-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 ssh -n multinode-046714 "sudo cat /home/docker/cp-test_multinode-046714-m03_multinode-046714.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 cp multinode-046714-m03:/home/docker/cp-test.txt multinode-046714-m02:/home/docker/cp-test_multinode-046714-m03_multinode-046714-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 ssh -n multinode-046714-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 ssh -n multinode-046714-m02 "sudo cat /home/docker/cp-test_multinode-046714-m03_multinode-046714-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.17s)
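Each hop of the copy matrix above is a minikube cp followed by an ssh cat that verifies the file landed. A condensed sketch with illustrative profile and node names (the primary node shares the profile name, as in the log):
	$ minikube -p multinode cp testdata/cp-test.txt multinode:/home/docker/cp-test.txt                     # host -> node
	$ minikube -p multinode cp multinode:/home/docker/cp-test.txt /tmp/cp-test.txt                         # node -> host
	$ minikube -p multinode cp multinode:/home/docker/cp-test.txt multinode-m02:/home/docker/cp-test.txt   # node -> node
	$ minikube -p multinode ssh -n multinode-m02 "sudo cat /home/docker/cp-test.txt"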

                                                
                                    
TestMultiNode/serial/StopNode (2.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-046714 node stop m03: (1.573484268s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-046714 status: exit status 7 (325.894909ms)

                                                
                                                
-- stdout --
	multinode-046714
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-046714-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-046714-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-046714 status --alsologtostderr: exit status 7 (325.347238ms)

                                                
                                                
-- stdout --
	multinode-046714
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-046714-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-046714-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 09:11:45.091995  922400 out.go:360] Setting OutFile to fd 1 ...
	I1217 09:11:45.092127  922400 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 09:11:45.092139  922400 out.go:374] Setting ErrFile to fd 2...
	I1217 09:11:45.092146  922400 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 09:11:45.092368  922400 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
	I1217 09:11:45.092571  922400 out.go:368] Setting JSON to false
	I1217 09:11:45.092610  922400 mustload.go:66] Loading cluster: multinode-046714
	I1217 09:11:45.092699  922400 notify.go:221] Checking for updates...
	I1217 09:11:45.093019  922400 config.go:182] Loaded profile config "multinode-046714": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 09:11:45.093039  922400 status.go:174] checking status of multinode-046714 ...
	I1217 09:11:45.095169  922400 status.go:371] multinode-046714 host status = "Running" (err=<nil>)
	I1217 09:11:45.095188  922400 host.go:66] Checking if "multinode-046714" exists ...
	I1217 09:11:45.098158  922400 main.go:143] libmachine: domain multinode-046714 has defined MAC address 52:54:00:b8:09:c4 in network mk-multinode-046714
	I1217 09:11:45.098643  922400 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:09:c4", ip: ""} in network mk-multinode-046714: {Iface:virbr1 ExpiryTime:2025-12-17 10:09:27 +0000 UTC Type:0 Mac:52:54:00:b8:09:c4 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-046714 Clientid:01:52:54:00:b8:09:c4}
	I1217 09:11:45.098689  922400 main.go:143] libmachine: domain multinode-046714 has defined IP address 192.168.39.35 and MAC address 52:54:00:b8:09:c4 in network mk-multinode-046714
	I1217 09:11:45.098878  922400 host.go:66] Checking if "multinode-046714" exists ...
	I1217 09:11:45.099091  922400 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 09:11:45.101391  922400 main.go:143] libmachine: domain multinode-046714 has defined MAC address 52:54:00:b8:09:c4 in network mk-multinode-046714
	I1217 09:11:45.101776  922400 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:09:c4", ip: ""} in network mk-multinode-046714: {Iface:virbr1 ExpiryTime:2025-12-17 10:09:27 +0000 UTC Type:0 Mac:52:54:00:b8:09:c4 Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:multinode-046714 Clientid:01:52:54:00:b8:09:c4}
	I1217 09:11:45.101803  922400 main.go:143] libmachine: domain multinode-046714 has defined IP address 192.168.39.35 and MAC address 52:54:00:b8:09:c4 in network mk-multinode-046714
	I1217 09:11:45.101960  922400 sshutil.go:56] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/multinode-046714/id_rsa Username:docker}
	I1217 09:11:45.188618  922400 ssh_runner.go:195] Run: systemctl --version
	I1217 09:11:45.194721  922400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 09:11:45.210238  922400 kubeconfig.go:125] found "multinode-046714" server: "https://192.168.39.35:8443"
	I1217 09:11:45.210273  922400 api_server.go:166] Checking apiserver status ...
	I1217 09:11:45.210318  922400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 09:11:45.228174  922400 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1401/cgroup
	W1217 09:11:45.239680  922400 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1401/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 09:11:45.239737  922400 ssh_runner.go:195] Run: ls
	I1217 09:11:45.244822  922400 api_server.go:253] Checking apiserver healthz at https://192.168.39.35:8443/healthz ...
	I1217 09:11:45.249701  922400 api_server.go:279] https://192.168.39.35:8443/healthz returned 200:
	ok
	I1217 09:11:45.249727  922400 status.go:463] multinode-046714 apiserver status = Running (err=<nil>)
	I1217 09:11:45.249747  922400 status.go:176] multinode-046714 status: &{Name:multinode-046714 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 09:11:45.249779  922400 status.go:174] checking status of multinode-046714-m02 ...
	I1217 09:11:45.251357  922400 status.go:371] multinode-046714-m02 host status = "Running" (err=<nil>)
	I1217 09:11:45.251375  922400 host.go:66] Checking if "multinode-046714-m02" exists ...
	I1217 09:11:45.253851  922400 main.go:143] libmachine: domain multinode-046714-m02 has defined MAC address 52:54:00:c6:88:40 in network mk-multinode-046714
	I1217 09:11:45.254215  922400 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:88:40", ip: ""} in network mk-multinode-046714: {Iface:virbr1 ExpiryTime:2025-12-17 10:10:23 +0000 UTC Type:0 Mac:52:54:00:c6:88:40 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:multinode-046714-m02 Clientid:01:52:54:00:c6:88:40}
	I1217 09:11:45.254234  922400 main.go:143] libmachine: domain multinode-046714-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:c6:88:40 in network mk-multinode-046714
	I1217 09:11:45.254372  922400 host.go:66] Checking if "multinode-046714-m02" exists ...
	I1217 09:11:45.254595  922400 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 09:11:45.256837  922400 main.go:143] libmachine: domain multinode-046714-m02 has defined MAC address 52:54:00:c6:88:40 in network mk-multinode-046714
	I1217 09:11:45.257154  922400 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:88:40", ip: ""} in network mk-multinode-046714: {Iface:virbr1 ExpiryTime:2025-12-17 10:10:23 +0000 UTC Type:0 Mac:52:54:00:c6:88:40 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:multinode-046714-m02 Clientid:01:52:54:00:c6:88:40}
	I1217 09:11:45.257173  922400 main.go:143] libmachine: domain multinode-046714-m02 has defined IP address 192.168.39.234 and MAC address 52:54:00:c6:88:40 in network mk-multinode-046714
	I1217 09:11:45.257285  922400 sshutil.go:56] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-893359/.minikube/machines/multinode-046714-m02/id_rsa Username:docker}
	I1217 09:11:45.338707  922400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 09:11:45.353382  922400 status.go:176] multinode-046714-m02 status: &{Name:multinode-046714-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1217 09:11:45.353415  922400 status.go:174] checking status of multinode-046714-m03 ...
	I1217 09:11:45.354943  922400 status.go:371] multinode-046714-m03 host status = "Stopped" (err=<nil>)
	I1217 09:11:45.354961  922400 status.go:384] host is not running, skipping remaining checks
	I1217 09:11:45.354969  922400 status.go:176] multinode-046714-m03 status: &{Name:multinode-046714-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)
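The stop/verify pair reduces to the following (profile name illustrative). As seen above, status exits with code 7 while any node is stopped, which the test treats as expected:
	$ minikube -p multinode node stop m03
	$ minikube -p multinode status --alsologtostderr   # exit status 7: m03 reported as host/kubelet Stopped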

                                                
                                    
TestMultiNode/serial/StartAfterStop (36.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-046714 node start m03 -v=5 --alsologtostderr: (35.534866483s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.06s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (284.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-046714
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-046714
E1217 09:12:28.439282  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:12:45.371221  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:13:31.645063  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:14:16.411289  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-046714: (2m37.345240524s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-046714 --wait=true -v=5 --alsologtostderr
E1217 09:15:28.574983  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-046714 --wait=true -v=5 --alsologtostderr: (2m7.102356256s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-046714
--- PASS: TestMultiNode/serial/RestartKeepsNodes (284.58s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-046714 node delete m03: (2.110323224s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.57s)
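Removing the third node and confirming the cluster view comes down to (profile name illustrative):
	$ minikube -p multinode node delete m03
	$ minikube -p multinode status --alsologtostderr
	$ kubectl get nodes   # only the control plane and m02 should remain, both Ready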

                                                
                                    
TestMultiNode/serial/StopMultiNode (172.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 stop
E1217 09:17:45.365019  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:19:16.411455  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-046714 stop: (2m52.677453141s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-046714 status: exit status 7 (64.084638ms)

                                                
                                                
-- stdout --
	multinode-046714
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-046714-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-046714 status --alsologtostderr: exit status 7 (62.912595ms)

                                                
                                                
-- stdout --
	multinode-046714
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-046714-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 09:20:01.359745  925191 out.go:360] Setting OutFile to fd 1 ...
	I1217 09:20:01.359993  925191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 09:20:01.360001  925191 out.go:374] Setting ErrFile to fd 2...
	I1217 09:20:01.360005  925191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 09:20:01.360228  925191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
	I1217 09:20:01.360394  925191 out.go:368] Setting JSON to false
	I1217 09:20:01.360422  925191 mustload.go:66] Loading cluster: multinode-046714
	I1217 09:20:01.360551  925191 notify.go:221] Checking for updates...
	I1217 09:20:01.360755  925191 config.go:182] Loaded profile config "multinode-046714": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 09:20:01.360769  925191 status.go:174] checking status of multinode-046714 ...
	I1217 09:20:01.362717  925191 status.go:371] multinode-046714 host status = "Stopped" (err=<nil>)
	I1217 09:20:01.362731  925191 status.go:384] host is not running, skipping remaining checks
	I1217 09:20:01.362736  925191 status.go:176] multinode-046714 status: &{Name:multinode-046714 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 09:20:01.362752  925191 status.go:174] checking status of multinode-046714-m02 ...
	I1217 09:20:01.363862  925191 status.go:371] multinode-046714-m02 host status = "Stopped" (err=<nil>)
	I1217 09:20:01.363874  925191 status.go:384] host is not running, skipping remaining checks
	I1217 09:20:01.363879  925191 status.go:176] multinode-046714-m02 status: &{Name:multinode-046714-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (172.80s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (86.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-046714 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1217 09:20:28.574732  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-046714 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m25.838070481s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-046714 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (86.31s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (37.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-046714
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-046714-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-046714-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (78.030011ms)

                                                
                                                
-- stdout --
	* [multinode-046714-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-893359/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-893359/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-046714-m02' is duplicated with machine name 'multinode-046714-m02' in profile 'multinode-046714'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-046714-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-046714-m03 --driver=kvm2  --container-runtime=crio: (36.137812038s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-046714
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-046714: exit status 80 (212.545564ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-046714 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-046714-m03 already exists in multinode-046714-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-046714-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.33s)
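Two guards are checked here: a new profile may not reuse a machine name that already belongs to a multi-node profile (MK_USAGE, exit 14), and node add refuses when the node name it would create already exists as a standalone profile (GUEST_NODE_ADD, exit 80). Roughly, with <profile> standing for an existing multi-node profile:
	$ minikube start -p <profile>-m02 --driver=kvm2 --container-runtime=crio   # rejected: duplicate machine name
	$ minikube node add -p <profile>                                           # rejected while <profile>-m03 exists as its own profile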

                                                
                                    
TestScheduledStopUnix (107.31s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-965529 --memory=3072 --driver=kvm2  --container-runtime=crio
E1217 09:24:16.411345  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-965529 --memory=3072 --driver=kvm2  --container-runtime=crio: (35.736999287s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-965529 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 09:24:39.708616  927426 out.go:360] Setting OutFile to fd 1 ...
	I1217 09:24:39.708726  927426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 09:24:39.708735  927426 out.go:374] Setting ErrFile to fd 2...
	I1217 09:24:39.708739  927426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 09:24:39.708901  927426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
	I1217 09:24:39.709138  927426 out.go:368] Setting JSON to false
	I1217 09:24:39.709222  927426 mustload.go:66] Loading cluster: scheduled-stop-965529
	I1217 09:24:39.709545  927426 config.go:182] Loaded profile config "scheduled-stop-965529": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 09:24:39.709614  927426 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/scheduled-stop-965529/config.json ...
	I1217 09:24:39.709794  927426 mustload.go:66] Loading cluster: scheduled-stop-965529
	I1217 09:24:39.709896  927426 config.go:182] Loaded profile config "scheduled-stop-965529": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-965529 -n scheduled-stop-965529
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-965529 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 09:24:39.992616  927472 out.go:360] Setting OutFile to fd 1 ...
	I1217 09:24:39.992750  927472 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 09:24:39.992757  927472 out.go:374] Setting ErrFile to fd 2...
	I1217 09:24:39.992762  927472 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 09:24:39.992979  927472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
	I1217 09:24:39.993215  927472 out.go:368] Setting JSON to false
	I1217 09:24:39.993433  927472 daemonize_unix.go:73] killing process 927461 as it is an old scheduled stop
	I1217 09:24:39.993571  927472 mustload.go:66] Loading cluster: scheduled-stop-965529
	I1217 09:24:39.993951  927472 config.go:182] Loaded profile config "scheduled-stop-965529": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 09:24:39.994017  927472 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/scheduled-stop-965529/config.json ...
	I1217 09:24:39.994196  927472 mustload.go:66] Loading cluster: scheduled-stop-965529
	I1217 09:24:39.994287  927472 config.go:182] Loaded profile config "scheduled-stop-965529": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1217 09:24:39.999500  897277 retry.go:31] will retry after 76.484µs: open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/scheduled-stop-965529/pid: no such file or directory
I1217 09:24:40.000664  897277 retry.go:31] will retry after 119.593µs: open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/scheduled-stop-965529/pid: no such file or directory
I1217 09:24:40.001812  897277 retry.go:31] will retry after 229.147µs: open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/scheduled-stop-965529/pid: no such file or directory
I1217 09:24:40.002963  897277 retry.go:31] will retry after 240.068µs: open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/scheduled-stop-965529/pid: no such file or directory
I1217 09:24:40.004108  897277 retry.go:31] will retry after 322.466µs: open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/scheduled-stop-965529/pid: no such file or directory
I1217 09:24:40.005236  897277 retry.go:31] will retry after 984.163µs: open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/scheduled-stop-965529/pid: no such file or directory
I1217 09:24:40.006376  897277 retry.go:31] will retry after 636.771µs: open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/scheduled-stop-965529/pid: no such file or directory
I1217 09:24:40.007500  897277 retry.go:31] will retry after 1.175712ms: open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/scheduled-stop-965529/pid: no such file or directory
I1217 09:24:40.009701  897277 retry.go:31] will retry after 2.904558ms: open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/scheduled-stop-965529/pid: no such file or directory
I1217 09:24:40.012904  897277 retry.go:31] will retry after 2.348898ms: open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/scheduled-stop-965529/pid: no such file or directory
I1217 09:24:40.016096  897277 retry.go:31] will retry after 6.473036ms: open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/scheduled-stop-965529/pid: no such file or directory
I1217 09:24:40.023320  897277 retry.go:31] will retry after 7.927936ms: open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/scheduled-stop-965529/pid: no such file or directory
I1217 09:24:40.031530  897277 retry.go:31] will retry after 11.772921ms: open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/scheduled-stop-965529/pid: no such file or directory
I1217 09:24:40.043761  897277 retry.go:31] will retry after 16.740566ms: open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/scheduled-stop-965529/pid: no such file or directory
I1217 09:24:40.061023  897277 retry.go:31] will retry after 28.251654ms: open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/scheduled-stop-965529/pid: no such file or directory
I1217 09:24:40.090305  897277 retry.go:31] will retry after 22.459648ms: open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/scheduled-stop-965529/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-965529 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-965529 -n scheduled-stop-965529
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-965529
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-965529 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 09:25:05.680954  927620 out.go:360] Setting OutFile to fd 1 ...
	I1217 09:25:05.681238  927620 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 09:25:05.681250  927620 out.go:374] Setting ErrFile to fd 2...
	I1217 09:25:05.681254  927620 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 09:25:05.681486  927620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
	I1217 09:25:05.681787  927620 out.go:368] Setting JSON to false
	I1217 09:25:05.681885  927620 mustload.go:66] Loading cluster: scheduled-stop-965529
	I1217 09:25:05.682206  927620 config.go:182] Loaded profile config "scheduled-stop-965529": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 09:25:05.682290  927620 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/scheduled-stop-965529/config.json ...
	I1217 09:25:05.682499  927620 mustload.go:66] Loading cluster: scheduled-stop-965529
	I1217 09:25:05.682644  927620 config.go:182] Loaded profile config "scheduled-stop-965529": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1217 09:25:28.575184  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-965529
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-965529: exit status 7 (63.621303ms)

                                                
                                                
-- stdout --
	scheduled-stop-965529
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-965529 -n scheduled-stop-965529
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-965529 -n scheduled-stop-965529: exit status 7 (61.779122ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-965529" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-965529
--- PASS: TestScheduledStopUnix (107.31s)
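The scheduled-stop workflow exercised above, reduced to plain commands (profile name illustrative). Per the logs, the scheduled stop is handed off to a background process, so the command itself returns immediately:
	$ minikube stop -p sched-test --schedule 5m          # schedule a stop five minutes out
	$ minikube status --format={{.TimeToStop}} -p sched-test
	$ minikube stop -p sched-test --cancel-scheduled     # cancel any pending scheduled stop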

                                                
                                    
TestRunningBinaryUpgrade (369.39s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1198545370 start -p running-upgrade-879489 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1198545370 start -p running-upgrade-879489 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m7.612594506s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-879489 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-879489 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (5m0.319764801s)
helpers_test.go:176: Cleaning up "running-upgrade-879489" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-879489
--- PASS: TestRunningBinaryUpgrade (369.39s)

                                                
                                    
TestKubernetesUpgrade (177.56s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-317314 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-317314 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (38.123921775s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-317314
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-317314: (2.118468647s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-317314 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-317314 status --format={{.Host}}: exit status 7 (84.062257ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-317314 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-317314 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m9.398332444s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-317314 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-317314 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-317314 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (75.53118ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-317314] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-893359/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-893359/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-rc.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-317314
	    minikube start -p kubernetes-upgrade-317314 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3173142 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-317314 --kubernetes-version=v1.35.0-rc.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-317314 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1217 09:27:45.365480  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-317314 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m6.533092399s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-317314" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-317314
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-317314: (1.172099788s)
--- PASS: TestKubernetesUpgrade (177.56s)
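The version path exercised above, as plain commands (profile name illustrative). The final downgrade attempt is refused with K8S_DOWNGRADE_UNSUPPORTED (exit 106), and the suggested recovery is a delete-and-recreate:
	$ minikube start -p k8s-upgrade --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
	$ minikube stop -p k8s-upgrade
	$ minikube start -p k8s-upgrade --memory=3072 --kubernetes-version=v1.35.0-rc.1 --driver=kvm2 --container-runtime=crio
	$ minikube start -p k8s-upgrade --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio   # refused: downgrade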

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-229767 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-229767 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (105.882186ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-229767] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-893359/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-893359/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
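Exit status 14 is the MK_USAGE exit shown in the stderr: --kubernetes-version and --no-kubernetes are mutually exclusive, so this negative test passes precisely because the start is rejected. A minimal sketch of the two valid alternatives (drop the version flag, or unset a globally configured one as the message suggests):

    $ minikube config unset kubernetes-version
    $ minikube start -p NoKubernetes-229767 --no-kubernetes --driver=kvm2 --container-runtime=crio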
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (96.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-229767 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-229767 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m36.152144158s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-229767 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (96.42s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.77s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.77s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (103.82s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3106125177 start -p stopped-upgrade-916798 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3106125177 start -p stopped-upgrade-916798 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (56.421620291s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3106125177 -p stopped-upgrade-916798 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3106125177 -p stopped-upgrade-916798 stop: (1.777189103s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-916798 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-916798 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (45.625387182s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (103.82s)

                                                
                                    
TestNetworkPlugins/group/false (5.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-960765 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-960765 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (162.659622ms)

                                                
                                                
-- stdout --
	* [false-960765] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-893359/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-893359/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 09:27:10.004940  929671 out.go:360] Setting OutFile to fd 1 ...
	I1217 09:27:10.005105  929671 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 09:27:10.005147  929671 out.go:374] Setting ErrFile to fd 2...
	I1217 09:27:10.005159  929671 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 09:27:10.005496  929671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-893359/.minikube/bin
	I1217 09:27:10.006219  929671 out.go:368] Setting JSON to false
	I1217 09:27:10.007677  929671 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":14976,"bootTime":1765948654,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 09:27:10.007790  929671 start.go:143] virtualization: kvm guest
	I1217 09:27:10.009448  929671 out.go:179] * [false-960765] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 09:27:10.011280  929671 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 09:27:10.011323  929671 notify.go:221] Checking for updates...
	I1217 09:27:10.014120  929671 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 09:27:10.015404  929671 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-893359/kubeconfig
	I1217 09:27:10.016742  929671 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-893359/.minikube
	I1217 09:27:10.017797  929671 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 09:27:10.019239  929671 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 09:27:10.021046  929671 config.go:182] Loaded profile config "NoKubernetes-229767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 09:27:10.021220  929671 config.go:182] Loaded profile config "kubernetes-upgrade-317314": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 09:27:10.021367  929671 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 09:27:10.069588  929671 out.go:179] * Using the kvm2 driver based on user configuration
	I1217 09:27:10.070681  929671 start.go:309] selected driver: kvm2
	I1217 09:27:10.070703  929671 start.go:927] validating driver "kvm2" against <nil>
	I1217 09:27:10.070721  929671 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 09:27:10.072778  929671 out.go:203] 
	W1217 09:27:10.073820  929671 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1217 09:27:10.074766  929671 out.go:203] 

                                                
                                                
** /stderr **
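The start is rejected during flag validation ("The "crio" container runtime requires CNI"), so no VM or kubeconfig context is ever created for false-960765; that is why every probe in the debugLogs block below reports a missing profile or context. Any real CNI passes this check; a sketch trimmed to the relevant flags, mirroring the kindnet start used later in this run:

    $ out/minikube-linux-amd64 start -p kindnet-960765 --cni=kindnet --driver=kvm2 --container-runtime=crio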
net_test.go:88: 
----------------------- debugLogs start: false-960765 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-960765

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-960765

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-960765

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-960765

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-960765

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-960765

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-960765

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-960765

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-960765

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-960765

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-960765

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-960765" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-960765" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-960765

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-960765"

                                                
                                                
----------------------- debugLogs end: false-960765 [took: 5.605023084s] --------------------------------
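Two failure shapes repeat through the debugLogs above, and both are expected: probes routed through kubectl fail with "context was not found" / "context ... does not exist", while host-level probes routed through minikube fail with "Profile "false-960765" not found", since the rejected start never created either. For example:

    $ kubectl --context false-960765 get pods -A             # context was not found
    $ out/minikube-linux-amd64 -p false-960765 ssh "ip a s"   # Profile "false-960765" not found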
helpers_test.go:176: Cleaning up "false-960765" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-960765
--- PASS: TestNetworkPlugins/group/false (5.95s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (52.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-229767 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-229767 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (51.230914116s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-229767 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-229767 status -o json: exit status 2 (198.075736ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-229767","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
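The status JSON is the interesting part of this step: after the --no-kubernetes restart the VM host is Running while Kubelet and APIServer are Stopped, which is also why the status command itself returns a non-zero exit (status 2 above). A quick way to pull those fields out, assuming jq is available on the host (it is not part of the test):

    $ out/minikube-linux-amd64 -p NoKubernetes-229767 status -o json | jq -r '.Host, .Kubelet, .APIServer'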
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-229767
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (52.26s)

                                                
                                    
TestNoKubernetes/serial/Start (40.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-229767 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-229767 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (40.913664442s)
--- PASS: TestNoKubernetes/serial/Start (40.91s)

                                                
                                    
TestPause/serial/Start (72.26s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-869559 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-869559 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m12.260460087s)
--- PASS: TestPause/serial/Start (72.26s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.05s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-916798
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-916798: (1.05259326s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.05s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22182-893359/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
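The cache directory above ends in v0.0.0, the placeholder version used when Kubernetes is disabled, and this check presumably asserts that no kubelet/kubeadm/kubectl binaries were pulled into it. A manual spot check against the same path, as a sketch (the directory may simply not exist, which is the good case):

    $ ls -la /home/jenkins/minikube-integration/22182-893359/.minikube/cache/linux/amd64/v0.0.0 2>/dev/null || echo "no cached Kubernetes binaries"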

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-229767 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-229767 "sudo systemctl is-active --quiet service kubelet": exit status 1 (157.7552ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
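The point of this check is that kubelet is not an active unit inside the guest: with --quiet, systemctl prints nothing and signals the state only through its exit code, so the non-zero ssh exit is the expected result. An equivalent manual check, as a sketch:

    $ out/minikube-linux-amd64 ssh -p NoKubernetes-229767 "sudo systemctl is-active kubelet || echo kubelet is not active"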
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.64s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-229767
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-229767: (1.255535561s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (61.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-229767 --driver=kvm2  --container-runtime=crio
E1217 09:29:08.441441  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:29:16.411564  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-229767 --driver=kvm2  --container-runtime=crio: (1m1.29854511s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (61.30s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-229767 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-229767 "sudo systemctl is-active --quiet service kubelet": exit status 1 (171.07617ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)
E1217 09:30:11.647039  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:30:28.574270  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestISOImage/Setup (33.29s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-885802 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-885802 --no-kubernetes --driver=kvm2  --container-runtime=crio: (33.285098342s)
--- PASS: TestISOImage/Setup (33.29s)
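The Binaries subtests that follow all use the same pattern from iso_test.go:76: ssh into the guest-885802 VM and verify that each tool resolves on PATH. An equivalent one-liner over the same set of binaries, as a sketch:

    $ for b in crictl curl docker git iptables podman rsync socat wget VBoxControl VBoxService; do
          out/minikube-linux-amd64 -p guest-885802 ssh "which $b" || echo "$b missing"
      done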

                                                
                                    
TestISOImage/Binaries/crictl (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-885802 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.20s)

                                                
                                    
TestISOImage/Binaries/curl (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-885802 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.20s)

                                                
                                    
TestISOImage/Binaries/docker (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-885802 ssh "which docker"
E1217 09:39:50.405065  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/enable-default-cni-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:39:50.567164  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/enable-default-cni-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/Binaries/docker (0.20s)

                                                
                                    
TestISOImage/Binaries/git (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-885802 ssh "which git"
E1217 09:39:49.810604  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/custom-flannel-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/Binaries/git (0.17s)

                                                
                                    
TestISOImage/Binaries/iptables (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-885802 ssh "which iptables"
E1217 09:39:50.241471  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/enable-default-cni-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:39:50.247958  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/enable-default-cni-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:39:50.259534  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/enable-default-cni-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:39:50.281123  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/enable-default-cni-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:39:50.322974  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/enable-default-cni-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/Binaries/iptables (0.20s)

                                                
                                    
TestISOImage/Binaries/podman (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-885802 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.19s)

                                                
                                    
TestISOImage/Binaries/rsync (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-885802 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.19s)

                                                
                                    
TestISOImage/Binaries/socat (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-885802 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.19s)

                                                
                                    
TestISOImage/Binaries/wget (0.23s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-885802 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.23s)

                                                
                                    
TestISOImage/Binaries/VBoxControl (0.27s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-885802 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.27s)

                                                
                                    
TestISOImage/Binaries/VBoxService (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-885802 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (59.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-960765 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-960765 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (59.502903863s)
--- PASS: TestNetworkPlugins/group/auto/Start (59.50s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (82.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-960765 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-960765 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m22.956420013s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (82.96s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-960765 "pgrep -a kubelet"
I1217 09:32:24.978394  897277 config.go:182] Loaded profile config "auto-960765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-960765 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-sq5pk" [d891f09e-8f9e-4fca-b377-51be653539ab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-sq5pk" [d891f09e-8f9e-4fca-b377-51be653539ab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005203381s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-960765 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-960765 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-960765 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
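The three connectivity checks above (DNS, Localhost, HairPin) boil down to the kubectl invocations quoted in the test lines, and can be replayed by hand against the auto-960765 context while the profile exists:

    $ kubectl --context auto-960765 exec deployment/netcat -- nslookup kubernetes.default
    $ kubectl --context auto-960765 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    $ kubectl --context auto-960765 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"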

                                                
                                    
TestNetworkPlugins/group/calico/Start (85.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-960765 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-960765 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m25.509835704s)
--- PASS: TestNetworkPlugins/group/calico/Start (85.51s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-44r4s" [b9af7bed-40a9-410f-8237-8b5f0c2ffb79] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005411578s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-960765 "pgrep -a kubelet"
I1217 09:32:57.594291  897277 config.go:182] Loaded profile config "kindnet-960765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-960765 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-drptt" [b09973e1-418e-4abb-8110-8850ff8b7e2f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-drptt" [b09973e1-418e-4abb-8110-8850ff8b7e2f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003998555s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-960765 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-960765 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-960765 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (73.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-960765 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-960765 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m13.094561787s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (73.09s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (80.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-960765 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-960765 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m20.118241718s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (80.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (89.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-960765 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E1217 09:34:16.411182  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-960765 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m29.386761299s)
--- PASS: TestNetworkPlugins/group/flannel/Start (89.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-7fq55" [da468b9c-ec2e-4b39-92a6-5e2376a18033] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004580617s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
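
ControllerPod waits for the CNI's own pods (here k8s-app=calico-node in kube-system) to become healthy before the connectivity subtests run. Roughly the same wait can be expressed with kubectl directly; this is a sketch, not the helper the test actually uses:

    # Approximate manual equivalent of the ControllerPod wait (context from this run):
    kubectl --context calico-960765 -n kube-system wait pod \
      -l k8s-app=calico-node --for=condition=Ready --timeout=10m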

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-960765 "pgrep -a kubelet"
I1217 09:34:22.674012  897277 config.go:182] Loaded profile config "calico-960765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-960765 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-5pdxq" [068f3ef5-5597-49fc-a0a2-e0a6ca6c200e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-5pdxq" [068f3ef5-5597-49fc-a0a2-e0a6ca6c200e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.00517391s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-960765 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-960765 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-960765 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-960765 "pgrep -a kubelet"
I1217 09:34:39.297087  897277 config.go:182] Loaded profile config "custom-flannel-960765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-960765 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-5h4hx" [7339e5d1-a8dd-42fe-92f3-38aa8882b8ca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-5h4hx" [7339e5d1-a8dd-42fe-92f3-38aa8882b8ca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005071498s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (60.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-960765 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-960765 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m0.186943476s)
--- PASS: TestNetworkPlugins/group/bridge/Start (60.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-960765 "pgrep -a kubelet"
I1217 09:34:49.991155  897277 config.go:182] Loaded profile config "enable-default-cni-960765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-960765 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-wgkzk" [76178c00-61a0-4d6b-8875-d857fd9c055a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-wgkzk" [76178c00-61a0-4d6b-8875-d857fd9c055a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005278953s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-960765 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-960765 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-960765 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-960765 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-960765 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-960765 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (62.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-368667 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-368667 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m2.409582236s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (62.41s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (90.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-319340 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-319340 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (1m30.006953115s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (90.01s)
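
The no-preload profile starts with --preload=false, which skips minikube's preloaded image tarball so container images are pulled during start; that likely accounts for this FirstStart taking noticeably longer than the preloaded profiles in this run. A minimal sketch (profile name is an illustrative placeholder):

    # Start without the preloaded image tarball; images are fetched at start time:
    minikube start -p no-preload-demo --memory=3072 --driver=kvm2 \
      --container-runtime=crio --preload=false --kubernetes-version=v1.35.0-rc.1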

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-bqkbv" [b612f13a-75ff-495a-9d22-bf14bced45f6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006059309s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-960765 "pgrep -a kubelet"
I1217 09:35:22.341088  897277 config.go:182] Loaded profile config "flannel-960765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-960765 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-vbhwb" [802215ca-70e2-4ed3-8677-3d5eed83b68d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-vbhwb" [802215ca-70e2-4ed3-8677-3d5eed83b68d] Running
E1217 09:35:28.574231  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-122342/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003617025s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-960765 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-960765 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-960765 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-960765 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-960765 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-wv52f" [392a4458-3fef-4bf9-a90d-9ecc32a9329a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-wv52f" [392a4458-3fef-4bf9-a90d-9ecc32a9329a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005293399s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (62.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-875470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-875470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3: (1m2.943557243s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (62.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-960765 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-960765 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-960765 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-368667 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [dc205252-6f3a-46b2-96b2-0584def724bb] Pending
helpers_test.go:353: "busybox" [dc205252-6f3a-46b2-96b2-0584def724bb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [dc205252-6f3a-46b2-96b2-0584def724bb] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.006296672s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-368667 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.43s)
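
Each DeployApp subtest creates the busybox pod from testdata/busybox.yaml, waits for it to reach Running, and then execs a trivial shell command, ulimit -n (the open-file-descriptor soft limit), apparently as a basic exec and rlimit sanity check. Reproduced from the log entry above:

    # Final step of DeployApp, copied from the log:
    kubectl --context old-k8s-version-368667 exec busybox -- /bin/sh -c "ulimit -n"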

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.77s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-229714 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-229714 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3: (54.767706087s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.77s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-368667 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-368667 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.270662379s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-368667 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.36s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (84.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-368667 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-368667 --alsologtostderr -v=3: (1m24.469475371s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (84.47s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-319340 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [d52ed249-72b7-4f29-98c6-f34b871ef803] Pending
helpers_test.go:353: "busybox" [d52ed249-72b7-4f29-98c6-f34b871ef803] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [d52ed249-72b7-4f29-98c6-f34b871ef803] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004390492s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-319340 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.35s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (7.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-875470 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [b0d83236-88f1-445f-bae3-ef9228543c6b] Pending
helpers_test.go:353: "busybox" [b0d83236-88f1-445f-bae3-ef9228543c6b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [b0d83236-88f1-445f-bae3-ef9228543c6b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.005029244s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-875470 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.31s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-319340 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-319340 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (90.36s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-319340 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-319340 --alsologtostderr -v=3: (1m30.357033882s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (90.36s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-875470 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-875470 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (85.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-875470 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-875470 --alsologtostderr -v=3: (1m25.987848566s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (85.99s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-229714 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [c5960835-127e-435c-b5c8-c8d446ac9e85] Pending
helpers_test.go:353: "busybox" [c5960835-127e-435c-b5c8-c8d446ac9e85] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [c5960835-127e-435c-b5c8-c8d446ac9e85] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003161729s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-229714 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-229714 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-229714 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.88s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (83.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-229714 --alsologtostderr -v=3
E1217 09:37:25.196281  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/auto-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:37:25.202794  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/auto-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:37:25.214261  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/auto-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:37:25.235706  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/auto-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:37:25.277658  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/auto-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:37:25.359062  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/auto-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:37:25.520793  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/auto-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:37:25.842692  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/auto-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:37:26.484128  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/auto-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:37:27.766272  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/auto-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:37:30.328165  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/auto-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:37:35.450465  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/auto-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-229714 --alsologtostderr -v=3: (1m23.925441472s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (83.93s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-368667 -n old-k8s-version-368667
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-368667 -n old-k8s-version-368667: exit status 7 (59.997396ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-368667 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)
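
The "exit status 7 (may be ok)" lines in the EnableAddonAfterStop subtests are expected after a stop: per the status command's help text, minikube encodes component state as bit flags in the exit code (1 for the host, 2 for the cluster, 4 for Kubernetes), so 7 simply means all three are reported down. A quick manual check against this run's profile:

    # Status of a stopped profile; exit code 7 is the "everything stopped" bitmask:
    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-368667 -n old-k8s-version-368667
    echo $?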

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (53.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-368667 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1217 09:37:45.365793  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/addons-102582/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:37:45.692371  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/auto-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:37:51.404156  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/kindnet-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:37:51.410502  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/kindnet-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:37:51.421851  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/kindnet-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:37:51.443223  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/kindnet-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:37:51.484609  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/kindnet-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:37:51.566127  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/kindnet-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:37:51.727811  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/kindnet-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:37:52.049586  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/kindnet-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:37:52.691671  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/kindnet-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:37:53.973501  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/kindnet-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:37:56.535620  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/kindnet-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:38:01.657013  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/kindnet-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:38:06.174125  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/auto-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:38:11.899047  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/kindnet-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-368667 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (53.096999978s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-368667 -n old-k8s-version-368667
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (53.34s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-319340 -n no-preload-319340
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-319340 -n no-preload-319340: exit status 7 (62.344817ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-319340 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (86.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-319340 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-319340 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (1m25.874867101s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-319340 -n no-preload-319340
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (86.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-875470 -n embed-certs-875470
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-875470 -n embed-certs-875470: exit status 7 (70.943972ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-875470 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (60.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-875470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3
E1217 09:38:32.380803  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/kindnet-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-875470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3: (59.659140708s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-875470 -n embed-certs-875470
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (60.02s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-pztc4" [7fc7abd2-0eaf-444a-828c-6d9689d20733] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005316469s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-pztc4" [7fc7abd2-0eaf-444a-828c-6d9689d20733] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004752514s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-368667 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-229714 -n default-k8s-diff-port-229714
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-229714 -n default-k8s-diff-port-229714: exit status 7 (70.773281ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-229714 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (70.75s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-229714 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3
E1217 09:38:47.135792  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/auto-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-229714 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3: (1m10.401043343s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-229714 -n default-k8s-diff-port-229714
E1217 09:39:57.429709  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/calico-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (70.75s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-368667 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-368667 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-368667 -n old-k8s-version-368667
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-368667 -n old-k8s-version-368667: exit status 2 (260.266836ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-368667 -n old-k8s-version-368667
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-368667 -n old-k8s-version-368667: exit status 2 (264.59205ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-368667 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-368667 -n old-k8s-version-368667
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-368667 -n old-k8s-version-368667
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.94s)
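
The Pause subtest pauses the profile, confirms that status --format={{.APIServer}} reports Paused and {{.Kubelet}} reports Stopped (both via exit status 2, which the test tolerates), then unpauses and checks status again. The same cycle by hand, using the commands from the log above:

    # Pause/unpause cycle (profile name from this run):
    out/minikube-linux-amd64 pause -p old-k8s-version-368667 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-368667    # "Paused", exit 2
    out/minikube-linux-amd64 unpause -p old-k8s-version-368667 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-368667    # should report Running again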

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (69.89s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-165775 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
E1217 09:39:13.342834  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/kindnet-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:39:16.411024  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:39:16.449829  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/calico-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:39:16.456843  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/calico-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:39:16.468407  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/calico-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:39:16.490610  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/calico-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:39:16.532405  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/calico-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:39:16.613999  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/calico-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:39:16.775857  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/calico-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:39:17.097209  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/calico-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:39:17.739254  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/calico-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:39:19.020831  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/calico-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:39:21.583205  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/calico-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:39:26.705017  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/calico-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-165775 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (1m9.894162497s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (69.89s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-ftx4b" [b1be2c8c-4223-42db-b4b1-d2c78c59df7c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-ftx4b" [b1be2c8c-4223-42db-b4b1-d2c78c59df7c] Running
E1217 09:39:36.946547  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/calico-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.005595902s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)
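The UserAppExistsAfterStop and AddonExistsAfterStop checks wait up to 9 minutes for the dashboard pod (label k8s-app=kubernetes-dashboard) to reach Running after the restart. Roughly the same readiness gate can be expressed with a kubectl wait; a small Go wrapper, assuming kubectl and the embed-certs-875470 context are available on the host, might be:

    // Hedged sketch: wait for the dashboard pod the same way the test polls for it.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("kubectl", "--context", "embed-certs-875470",
            "wait", "--namespace=kubernetes-dashboard",
            "--for=condition=ready", "pod",
            "--selector=k8s-app=kubernetes-dashboard", "--timeout=9m")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("dashboard pod not ready: %v\n%s", err, out)
        }
    }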

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-ftx4b" [b1be2c8c-4223-42db-b4b1-d2c78c59df7c] Running
E1217 09:39:39.555774  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/custom-flannel-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:39:39.562397  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/custom-flannel-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:39:39.574175  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/custom-flannel-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:39:39.595662  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/custom-flannel-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:39:39.637178  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/custom-flannel-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:39:39.718763  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/custom-flannel-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:39:39.880571  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/custom-flannel-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:39:40.202182  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/custom-flannel-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:39:40.844297  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/custom-flannel-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:39:42.126675  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/custom-flannel-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00559832s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-875470 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-875470 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-875470 --alsologtostderr -v=1
E1217 09:39:44.688339  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/custom-flannel-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-875470 --alsologtostderr -v=1: (1.314557518s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-875470 -n embed-certs-875470
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-875470 -n embed-certs-875470: exit status 2 (264.288831ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-875470 -n embed-certs-875470
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-875470 -n embed-certs-875470: exit status 2 (242.953175ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-875470 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-875470 -n embed-certs-875470
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-875470 -n embed-certs-875470
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.42s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//data (0.22s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-885802 ssh "df -t ext4 /data | grep /data"
E1217 09:39:50.888464  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/enable-default-cni-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/PersistentMounts//data (0.22s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/docker (0.22s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-885802 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.22s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/cni (0.23s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-885802 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.23s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/kubelet (0.23s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-885802 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.23s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/minikube (0.22s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-885802 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.22s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/toolbox (0.25s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-885802 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
E1217 09:39:51.529962  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/enable-default-cni-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.25s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/boot2docker (0.23s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-885802 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.23s)
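Each PersistentMounts case above runs the same probe over ssh: `df -t ext4 <path> | grep <path>`, which only matches when the path is backed by a persistent ext4 mount rather than tmpfs. A compact Go sketch that repeats the probe for all of the paths exercised here, via `minikube ssh` against the guest-885802 profile:

    // Hedged sketch: verify each path is an ext4-backed mount inside the ISO VM.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        paths := []string{
            "/data", "/var/lib/docker", "/var/lib/cni", "/var/lib/kubelet",
            "/var/lib/minikube", "/var/lib/toolbox", "/var/lib/boot2docker",
        }
        for _, p := range paths {
            probe := fmt.Sprintf("df -t ext4 %s | grep %s", p, p)
            cmd := exec.Command("out/minikube-linux-amd64", "-p", "guest-885802", "ssh", probe)
            if err := cmd.Run(); err != nil {
                fmt.Printf("%s: not an ext4 persistent mount (%v)\n", p, err)
                continue
            }
            fmt.Printf("%s: ok\n", p)
        }
    }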

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-rpwv7" [1766cab5-d317-480e-9f32-6ba2079a4092] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-rpwv7" [1766cab5-d317-480e-9f32-6ba2079a4092] Running
E1217 09:40:00.052388  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/custom-flannel-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 09:40:00.495715  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/enable-default-cni-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.005546405s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

                                                
                                    
x
+
TestISOImage/VersionJSON (0.21s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-885802 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   iso_version: v1.37.0-1765846775-22141
iso_test.go:118:   kicbase_version: v0.0.48-1765661130-22141
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: 1d20c337b4b256c51c2d46553500e8ea625f1d01
--- PASS: TestISOImage/VersionJSON (0.21s)
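The VersionJSON case reads /version.json from the guest and reports the four fields printed above. Decoding it is straightforward; a sketch, assuming the JSON keys match the field names shown in the log (iso_version, kicbase_version, minikube_version, commit):

    // Hedged sketch: fetch and decode /version.json from the ISO image.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    type isoVersion struct {
        ISOVersion      string `json:"iso_version"`
        KicbaseVersion  string `json:"kicbase_version"`
        MinikubeVersion string `json:"minikube_version"`
        Commit          string `json:"commit"`
    }

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "-p", "guest-885802",
            "ssh", "cat /version.json").Output()
        if err != nil {
            log.Fatal(err)
        }
        var v isoVersion
        if err := json.Unmarshal(out, &v); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("iso=%s kicbase=%s minikube=%s commit=%s\n",
            v.ISOVersion, v.KicbaseVersion, v.MinikubeVersion, v.Commit)
    }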

                                                
                                    
x
+
TestISOImage/eBPFSupport (0.24s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-885802 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.24s)
E1217 09:39:55.373921  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/enable-default-cni-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
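The eBPF check is a plain file-existence test: a kernel built with BTF exposes /sys/kernel/btf/vmlinux. The same probe against the guest, sketched in Go:

    // Hedged sketch: the eBPF/BTF probe used above, run over minikube ssh.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, _ := exec.Command("out/minikube-linux-amd64", "-p", "guest-885802", "ssh",
            "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'").Output()
        fmt.Printf("BTF vmlinux: %s", out)
    }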

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-l5n9p" [c6388651-5e5d-4969-9116-9125803aa50e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005498508s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-165775 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-165775 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.16900899s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.17s)
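The addon steps in this group show the image/registry override syntax: --images=Name=image and --registries=Name=registry, used here to point metrics-server at registry.k8s.io/echoserver:1.4 pulled from fake.domain (and, after the stop, dashboard's MetricsScraper at the same image). A minimal Go invocation of that pattern against the same profile, as a sketch rather than the test's own code:

    // Hedged sketch: enable an addon with overridden image and registry, as above.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "addons", "enable", "metrics-server",
            "-p", "newest-cni-165775",
            "--images=MetricsServer=registry.k8s.io/echoserver:1.4",
            "--registries=MetricsServer=fake.domain")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("enable failed: %v\n%s", err, out)
        }
    }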

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-l5n9p" [c6388651-5e5d-4969-9116-9125803aa50e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004115132s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-229714 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-165775 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-165775 --alsologtostderr -v=3: (7.197355972s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-rpwv7" [1766cab5-d317-480e-9f32-6ba2079a4092] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004278906s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-319340 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E1217 09:40:09.057890  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/auto-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-229714 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-229714 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-229714 -n default-k8s-diff-port-229714
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-229714 -n default-k8s-diff-port-229714: exit status 2 (246.025834ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-229714 -n default-k8s-diff-port-229714
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-229714 -n default-k8s-diff-port-229714: exit status 2 (259.407556ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-229714 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-229714 -n default-k8s-diff-port-229714
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-229714 -n default-k8s-diff-port-229714
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.95s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-319340 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-319340 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-319340 --alsologtostderr -v=1: (1.002921212s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-319340 -n no-preload-319340
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-319340 -n no-preload-319340: exit status 2 (248.581217ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-319340 -n no-preload-319340
E1217 09:40:10.737816  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/enable-default-cni-960765/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-319340 -n no-preload-319340: exit status 2 (248.411187ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-319340 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-319340 -n no-preload-319340
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-319340 -n no-preload-319340
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.04s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-165775 -n newest-cni-165775
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-165775 -n newest-cni-165775: exit status 7 (78.970919ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-165775 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (30.68s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-165775 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-165775 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (30.445607658s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-165775 -n newest-cni-165775
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (30.68s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-165775 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-165775 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-165775 -n newest-cni-165775
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-165775 -n newest-cni-165775: exit status 2 (203.573108ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-165775 -n newest-cni-165775
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-165775 -n newest-cni-165775: exit status 2 (216.995951ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-165775 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-165775 -n newest-cni-165775
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-165775 -n newest-cni-165775
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.22s)

                                                
                                    

Test skip (52/431)

Order Skipped test Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.3/cached-images 0
15 TestDownloadOnly/v1.34.3/binaries 0
16 TestDownloadOnly/v1.34.3/kubectl 0
23 TestDownloadOnly/v1.35.0-rc.1/cached-images 0
24 TestDownloadOnly/v1.35.0-rc.1/binaries 0
25 TestDownloadOnly/v1.35.0-rc.1/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0.31
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
209 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv 0
218 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
219 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel 0.01
220 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService 0.01
221 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect 0.01
222 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
223 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
224 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
225 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel 0.01
258 TestGvisorAddon 0
280 TestImageBuild 0
308 TestKicCustomNetwork 0
309 TestKicExistingNetwork 0
310 TestKicCustomSubnet 0
311 TestKicStaticIP 0
343 TestChangeNoneUser 0
346 TestScheduledStopWindows 0
348 TestSkaffold 0
350 TestInsufficientStorage 0
354 TestMissingContainerUpgrade 0
360 TestNetworkPlugins/group/kubenet 4.25
370 TestNetworkPlugins/group/cilium 4.44
378 TestStartStop/group/disable-driver-mounts 0.24
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.31s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-102582 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-960765 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-960765

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-960765

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-960765

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-960765

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-960765

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-960765

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-960765

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-960765

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-960765

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-960765

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-960765

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-960765" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-960765" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22182-893359/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 09:27:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.161:8443
  name: force-systemd-env-173918
contexts:
- context:
    cluster: force-systemd-env-173918
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 09:27:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: force-systemd-env-173918
  name: force-systemd-env-173918
current-context: force-systemd-env-173918
kind: Config
users:
- name: force-systemd-env-173918
  user:
    client-certificate: /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/force-systemd-env-173918/client.crt
    client-key: /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/force-systemd-env-173918/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-960765

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-960765"

                                                
                                                
----------------------- debugLogs end: kubenet-960765 [took: 3.989815114s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-960765" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-960765
--- SKIP: TestNetworkPlugins/group/kubenet (4.25s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E1217 09:27:19.479948  897277 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-893359/.minikube/profiles/functional-452472/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic.go:615: 
----------------------- debugLogs start: cilium-960765 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-960765

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-960765

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-960765

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-960765

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-960765

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-960765

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-960765

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-960765

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-960765

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-960765

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-960765

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-960765" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-960765

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-960765

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-960765

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-960765

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-960765" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-960765" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-960765

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-960765" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-960765"

                                                
                                                
----------------------- debugLogs end: cilium-960765 [took: 4.233724233s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-960765" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-960765
--- SKIP: TestNetworkPlugins/group/cilium (4.44s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-126237" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-126237
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)

                                                
                                    